So, could you give a warm welcome of applause to Stephen Grunke, who will be talking to you in one minute.

So, hi everybody. My name is Stephen Grunke. I've been a software developer for about 15 years, working on solo projects and in larger and smaller teams. My development stack has mostly been JavaScript, and you will find that some of the tools I mention come from that world, but I'm sure you can also find something here that applies to your project. Here's my email address, my PGP key, and my favorite social network account.

A little spoiler of what will happen today: I will talk about development process exploitation. That means if you are developing software and somebody joins your team and sends you code for review, it can happen that it executes code on your machine without your knowledge. There are a few things that are really hard to catch, or that I find hard to catch, and I want to share them. Maybe you have the same problems and find that the same mitigations apply to your project as well.

Let's start with the software development process. It's a small cycle. First of all, it starts with an operating system. You need a computer to write the software, and that's something you need to trust first of all. Your operating system contains keys and credentials, the source code you want to develop, and the tools you have in place. The major risk is that the tools are vulnerable to some exploitation, or that your host is already compromised and the software you commit for your coworkers isn't what you intended to write. That's the larger problem here.

Once you start writing code, the editor is the interface you use to write the files and edit the code, and I find editors to be surprisingly complex tools.
On the left you can see that many editors come with a package manager included, which is a good sign of the complexity these tools have. I don't know which tools you need, but they support you in development; it's very good, for example, to have code linters and auto-completion in place to write better code. At the same time, they can be a problem, because they can execute code unattended, as we will see in a moment. The mitigations I came up with for the editor part: run your editor in a virtualized environment, so that if something happens and it is compromised, your host system is not compromised as well. You want to monitor all the config files you have in the project, and you want to become aware of what exactly happens on your system when you run this code.

The next part you will probably use is a shell integration: as soon as you open your repository, some of the shells I saw tell you which branch you're working in, which files were changed, and so on. It's something that is very neat when you're developing, but it can be a risk as well. My opinion on shell integrations is that they are mostly made for software development on your own system. When you write the code yourself and can trust it, it's not a problem to use these tools at all, but as soon as you get sources from foreign developers, it can be a problem. So choose your tools wisely and don't execute code from others if possible.

The versioning system that you commit your code to is also worth a close look. Git, for example, can execute hooks on different occasions: when you check out new code, when you commit, and so on. That means if somebody gets you to clone a repository with a .git folder or an .hg folder included, your system may decide to execute whatever is in the hooks.
It's not possible to store a .git folder within a Git repository, but it is possible to store one in a Mercurial repository or in SVN, and then your shell integration won't know what the original source was and will execute the hooks anyway. The second thing: Visual Studio Code, for example, introduced support for Git hooks this October, which is a great feature, right? The mitigations against this are pretty easy. You can either set a different hooks path that is not within the project repository, so that you don't execute Git hooks at all, or you can use the little wrapper you see here, which at least checks that there is no Git hook file within that folder before you execute Git. It's a very good choice if you want to protect yourself from that vulnerability.

After you have committed the code and pushed it to the versioning server, you are probably going to build it automatically. Services like Travis CI will run it for you: they run tests, compile the software, and also do the package versioning and deployment to other places. It becomes a problem when you can't reproduce the results from your build runner, because it's a system you don't always control. As soon as you get the binary result from it, you need to check that result somehow, because somebody could have altered it without your knowledge before you ship it to your users.

Also a problem on many of these build workers: you want this process to be very fast, so you don't want to wait until all the dependencies are installed, and a great service here is caching between builds. This means that if somebody manages to inject a version into the cache of some CI system, it can eventually show up in other projects as well, and you can pivot across projects.
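The hooks mitigation described above can be sketched in a few lines of shell. This is my own minimal reconstruction, not the exact wrapper from the slides; the directory and function names are made up for illustration:

```shell
# Point Git at an empty hooks directory outside any repository,
# so hooks shipped inside a cloned repo are never executed.
mkdir -p ~/.git-hooks-empty
git config --global core.hooksPath ~/.git-hooks-empty

# Alternatively, a small wrapper that refuses to run Git if the
# local repository carries any active (non-sample) hook files.
safe_git() {
    if ls .git/hooks/* >/dev/null 2>&1; then
        # Git's default samples end in .sample; anything else is suspicious.
        if ls .git/hooks/ | grep -qv '\.sample$'; then
            echo "refusing: active hooks found in .git/hooks" >&2
            return 1
        fi
    fi
    git "$@"
}
```

With `core.hooksPath` set globally, even a repository that smuggles hooks into its metadata folder cannot get them executed by your local Git.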
Usually a build environment has access to some kind of deployment key. For pull requests from external contributors, the keys are stored encrypted and you don't have access to them, but as soon as somebody has write access to your repository, the keys can be leaked. Let's make an example. Somebody is working on your software and you don't give them permission to edit the master branch of the repository, but as soon as they open a branch anywhere in the repository and make a pull request, Travis CI and other build runners will decrypt the secrets for that build and give them access to the credentials, which they can then print out or use however they intend to.

For me, the best option here would be reproducible builds, because then you can use several different build workers and compare the results, so that if one gets compromised, the other two will tell you: hey, that's a different result, please have a look. That would be great. Also, the build steps I mentioned, building, testing, and packaging the software, are totally different steps, so you can use one compartment per step and observe what happens at a finer level.

After you build the software, you need to ship it to the user somehow. Either you store it on your own server, or, most often, you use a CDN: you just put the asset there, your users come around, download it, and execute it. So what is the problem here? The problem is that with just a URL, it's very hard to prove that it actually comes from the real maintainer. If you name your account like a different project, people won't be able to notice the difference. What you can do to mitigate this is to publish the URLs that you are actually using, and also sign your assets, so that users can check: is this something the developer intended to give me, or is it something else entirely?
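The cross-checking of reproducible builds can be sketched very simply: hash the artifact that each independent build worker produced and compare. The directory and file names below are my own, and the identical files stand in for two builders producing bit-identical output:

```shell
# Compare the artifact produced by two independent build workers.
# builder-a/ and builder-b/ are hypothetical output directories;
# here both get identical stand-in contents.
mkdir -p builder-a builder-b
printf 'binary-contents' > builder-a/app.bin
printf 'binary-contents' > builder-b/app.bin

hash_a=$(sha256sum builder-a/app.bin | cut -d' ' -f1)
hash_b=$(sha256sum builder-b/app.bin | cut -d' ' -f1)

if [ "$hash_a" = "$hash_b" ]; then
    echo "builds match"
else
    echo "MISMATCH: one builder may be compromised" >&2
fi
```

This only works when the build is actually reproducible, i.e. free of timestamps, random seeds, and host-specific paths in the output.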
And the next part: you need to reach out to your users, make people aware that there is a project they can check out and install, and usually that happens through package registries. A few slides back you saw that package managers are also included in the editors. The package registry I was mostly looking at was npm, for example. There was an interesting occasion where somebody had a package called kik. The company Kik tried to take it down, and the developer ignored it at first, but then Kik reached out to npm directly, and npm deleted the package. In consequence, the developer removed all of his packages from the registry, and a few hours later packages showed up again with the same names. That means if your software uses those dependencies and somebody frees up the names, it affects your project as well and can compromise it.

That's something that needs mitigation. I think the best idea here is to identify a package not only by its name, but also by a GUID or unique identifier per project that does not change, so that you can tell the difference. That's something that is up to the package registries to implement; it's not something we can do as users. But it's a very common case that packages fluctuate like this, and if somebody deletes one, you don't have a backup of it. A very good idea is to store offline backups of every package that you install into your software, because it's very bad if you want to maintain your software and figure out that something is missing because it was deleted.

Software developers have some needs during their work. I want my tooling to perform: if my code editor, for example, is in a VM and the VM is slow, that's annoying throughout the whole process.
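Such an offline backup can be sketched as a directory of package tarballs plus a checksum manifest. This is my own sketch; the directory name and the stand-in tarball are made up, and in practice the tarballs would come from your package manager's cache or download step:

```shell
# Archive every installed package tarball and record its checksum,
# so a deleted or silently replaced upstream package can be detected.
mkdir -p package-backup
printf 'stand-in tarball contents' > package-backup/left-pad-1.0.0.tgz

# Write a manifest of checksums next to the backups.
( cd package-backup && sha256sum *.tgz > MANIFEST.sha256 )

# Later: verify that nothing in the backup changed or went missing.
( cd package-backup && sha256sum -c MANIFEST.sha256 )
```

If upstream deletes the package, you can still install from the backup; if upstream republishes something different under the same name and version, the checksum comparison will catch it.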
On the other hand, velocity is something your manager will require from you if you write commercial software: you're trying to get something done, and you can't spend all day working on chores and improving your repository, the versioning, and so on. That's something you need to deal with. Another big factor for me is reliability. If your software goes down while you are on holiday, anybody else from the company or your team should be able to recover what was there before; this is also known as the bus factor. Then there is convenience: Ruby on Rails, for example, gives you a very easy start into a project, and that's something you don't want to break by making the development environment too complicated. Something I also found to be more annoying than helpful: if you want to pair program and you have a very compartmentalized environment, it's very hard to share the resources you need to talk about with other developers, assuming you're not in the same room but working remotely, which for me is most often the case.

A large problem I saw: if somebody hands you code, or if you check out code from some online resource, it's sometimes very hard to tell whether the code you see, on GitHub for example, is what you would really expect to see. I have some examples that show how this can work, how it can look when somebody tries to inject code into your repository that you don't see.

First of all, let's start with something easy: phishing. What you see here on the slide, on the left side, maybe you see the cursor, is not a full path; it's just a domain name. The slashes in here are UTF-8 characters, so this whole thing resolves to a single hostname, and if you control that host, you can get a certificate for it. In the example below, you see how it would look if you install it.
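The lookalike-slash trick can be demonstrated in a few lines. U+2215 (DIVISION SLASH, UTF-8 bytes E2 88 95, written below as octal escapes) renders almost exactly like "/", but to a resolver it is just an ordinary character inside one long hostname. The domain names here are made up for illustration:

```shell
# A string that LOOKS like a host plus a path, but the "slashes"
# are U+2215 characters, so the whole thing is ONE hostname.
fake=$(printf 'evil.example\342\210\225github.com\342\210\225project')

# What the victim believes they are looking at:
real='evil.example/github.com/project'

# They render nearly identically, but are different strings,
# and only the attacker's registered hostname is ever contacted.
if [ "$fake" != "$real" ]; then
    echo "lookalike differs from the real path"
fi
```

An attacker who registers the full lookalike hostname can also obtain a valid TLS certificate for it, which is exactly the scenario described above.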
First, I have a host that's just running a web server on port 80 so that you can see the result. OK, I was cheating a little bit: I put the domain in /etc/hosts so that I didn't have to buy it just for this demo. It's strange that .zip is a domain, actually. But then, if you install it, you see that you can send somebody a very nice-looking link which looks like a totally different project but points to your server instead. And I found that many of the package managers have the nice feature of executing post-install hooks, which means that once you have installed a package, they will run some commands for you afterwards.

Then there is invisible code. You go online, find an article in a forum or in a blog, and see: this code actually solves my problem. You go ahead and copy-paste it. On the left, you see the source code, how it looks in the HTML of the blog. On the right is the result. You can copy-paste from it, and if you paste it into a text area, you will see that the result is something you didn't expect. If you copy a large chunk of code, you won't go and review it again on your local system, and that could be the compromise of your project.

Another example: you can use ASCII control characters to influence the output in your terminal. If your terminal supports the legacy ASCII control characters, you can use them to return to the start of the line and overwrite it with something you wouldn't expect. What you see on top here, that harmless script, is the file. It's a little larger than you would expect for just the echo foo, but not something you would notice just by looking at it. Looking at it in a hex editor, you can see that there is more going on than just the foo.
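The control-character trick can be reproduced with a few lines of shell. This is my own reconstruction of the idea, not the exact bytes from the slides; the payload and filenames are made up:

```shell
# Build a script whose payload is hidden by a carriage return (\r):
# the terminal prints the payload, returns to column 0, and then
# overwrites it with the innocent-looking "echo foo".
printf 'touch pwned.txt #\recho foo           \n' > harmless.sh

cat harmless.sh      # the terminal shows only: echo foo
od -c harmless.sh    # an octal dump reveals the hidden command

# The shell, however, does not treat \r as a line break: the "#"
# comments out the cover text, so "touch pwned.txt" is what runs.
sh harmless.sh
ls pwned.txt         # the file exists; nothing was echoed
```

The cover text is padded with spaces so it fully overwrites the payload on screen; in a hex dump or a dumb pager the payload is plainly visible.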
And if you actually execute it, it does not print anything; it creates a pwned text file, which is a good demonstration that your host was compromised at that moment. Another example I found online, so credit to area for this: there is a byte sequence you can use so that this even works in a git diff. When you're working exclusively in your terminal and you're not doing reviews on GitHub or in some graphical tool, it can happen that you don't notice what is going on. What you see here on the left: I created an empty repository and added a small script. In the next step, down here, I added some improvement to the script, which is actually the malicious commit, here in red. Afterwards, I ran git diff on the code, and I only see "no backdoor". Oh, sorry, that should be fixed in the updated slides. So you don't see that evil.sh is executed as well if you run it. That's something I consider very dangerous.

So, some mitigations. The best thing you can do is make it expensive for attackers to compromise you, or to try to. As soon as you have the chance to notice what is going on, even retrospectively, you can at least burn their capabilities and tell others how somebody attempted to compromise your project. That is, in my opinion, the best mitigation against this complexity. You can also test your software with external services directly, which will tell you if some compromise happened. For example, GitHub newly introduced checks of your packages, your dependencies, and will warn you about dependencies with commonly known vulnerabilities. The best thing you can do on your local system is to build small compartments, so that if some compromise happens, it doesn't affect your full host, nor all the projects you have access to. And it's very important to have backups on a different system than the host you're working on.
So if a compromise happens, you still have access to the original data and can compare it and do some forensics. For intrusion detection and forensics there are some great tools available; my favorites are DTrace and opensnoop. You can monitor changes and accesses on the file system, or on your system as a whole, and you can, for example, write rules that match your projects specifically. I'm not going to share rules that match for all projects, but you will figure out what is important. A very good starter, for example, is to watch /etc/passwd with opensnoop: if there was some access, you can say, that's not something my software would do. And again, it's very important to have backups of this, because at the moment you detect something, you can't trust your host at all.

The idea of how to achieve this: if you have one VM per project, for example, and you let it run for half a year, you don't improve the situation. Instead of having one system whose software you need to update, you afterwards need to frequently update all the projects you're working on, and that's easy to forget, so it's dangerous. If, instead, every time you run some command or work in a project, you spin up a new server entirely from scratch, install the dependencies, and so on, that is not a risk for you. Also, if you have a virtualized server environment, you can take memory dumps at any time, you can monitor the network, and you can also diff the file system: for example, you stop the server, compare it to a previous version, and see, hey, here's something that was changed that I didn't plan. That's great to know.

Very important is also to separate your accounts. For example, you see large GitHub accounts where people have been making contributions every day for years.
That shows that these people have access to many projects from the same machine. And the permission model of GitHub, for example, allows you to store an SSH key for write access, but that key automatically has access to all the repositories you control. So the best idea here is to make a new account on the versioning system that has write access exclusively to that single repository. Then, when you work in your compartmented system and you want to push or pull changes, you can't influence other repositories. That means a compromise doesn't spread across all your projects, which would otherwise be an invitation for malware, or ransomware. And you get a better permission model if you create a GitHub organization; in that case, you can also limit your own access in a better way. So my recommendation is not to work in your personal GitHub account, but to create an organization for your project.

Something many projects are missing are defined responsible persons for security, and clear communication about the plan for incident response. A small example: you find a vulnerability in a project and you would like to report it, but you don't open an issue publicly, because then every user would be affected. You try to reach out to some developers, and if you have no clue how to do that securely, it can get you into trouble. There are quite a few projects which don't communicate this, and some of them don't even respond to their security@ email address, which is bad.

OK. With this, I have told you what I saw from my experience working on projects. That's basically my summary of what can be harmful and what can be good for your project. Thank you.

And we now have time for Q&A. In the room you can line up behind the microphones, and I can see we have a question from the internet already. What about Git signed commits? Any thoughts on that?
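Setting up such a per-repository key can be sketched like this. The file and directory names are my own; on GitHub, the public half would be uploaded as a deploy key for that one repository (or to a dedicated single-purpose account):

```shell
# Generate a key used only for this one repository, with no passphrase
# prompt for the demo (use a passphrase in practice).
ssh-keygen -t ed25519 -N '' -f ./repo-only-key -C 'single-repo key'

# Tell Git to use exactly this key, and no others, for this repo.
git init -q example-repo
cd example-repo
git config core.sshCommand 'ssh -i ../repo-only-key -o IdentitiesOnly=yes'
```

With `IdentitiesOnly=yes`, a compromise inside this compartment cannot fall back to other keys in your SSH agent, so it cannot push to any other repository.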
So as soon as you have signed commits, and you presumably also email with the same PGP key, it's interesting that you probably have the PGP key on the same host as your Git executable. So if somebody executes Git hooks, they can steal your PGP keys that way. I didn't find any tutorial online which explains how to sign manually, so that you don't use Git itself for signing the commits. I think signing commits can be very good, but it can also be dangerous, because your email communication can be compromised as well.

Microphone number 4? In the git diff you showed us there were some control characters. I think git diff pipes to less by default, so shouldn't they appear there somewhere? No, they don't; I just checked with the latest version today. That's something where you can also click on the blog and see if there is a video available. It's very hard to show from my HTML slides how this works, so maybe this video animation sketches it a bit; that's how it would work. Most often, yes: if you pipe to less yourself, or use a hex editor to review, then you would notice. I somehow remember that maybe it only shows for longer diffs, because when I type git diff I can scroll around. That's interesting, OK. I need to try it.

Microphone number 1? You mentioned Travis having access to hidden variables, and you being able to leak those variables during pull requests. What are your suggestions to mitigate that? Don't give people write access to your repository, not even to branches, if you don't trust them; as soon as they have write access, they also know the secrets behind the variables. I like the security model, because if you get contributions from outside, nobody can trigger that and steal your keys. But as soon as you build on your own branch somewhere in the repository, that changes. Yes, but if you submit a pull request, you don't necessarily have to have write access to that repository. Yes, that's what I mean.
If you come from outside and the branch is not within the same repository, the secrets are not decrypted, so you can't run those steps; for example, you would not want to deploy directly from a foreign branch.

We have a question from microphone number 4. You mentioned the problem with different compartments and how to exchange those environments with other people. I think that problem has already been solved with Vagrant and some kind of provisioning software like Ansible. Do you have any experience with checking the results of Vagrant boxes that are automatically provisioned, like having some software to check those environments afterwards, some kind of hashing, to find out whether they have been reproduced the same way, or whether any exploit was used in the process of setting up the Vagrant environment? Yes, there are different levels you can look at this on; let me try to find it. You can, for example, take a memory dump at any time if you have the host running somewhere. Or was your question exactly that you want to check that the environment that was spun up was not compromised? Yes, there has to be some kind of process for verifying that the produced environments are the ones you expect them to be, or whether they have been compromised. The problem is, I have used those environments, and I first tried encryption for the Vagrant boxes, but it's always the same key for the encryption, so that doesn't work; and, as you mentioned, you can take a memory dump and read out that key, so there is no real possibility to set up a Vagrant box that can't be tampered with afterwards. So there has to be some kind of hash sum to compare the produced results. As soon as you have a reproducible build, and with scripting languages that's much easier to achieve, you can just diff the file system directory and see whether there was some change.
What I would do in this case is run multiple services and compare the results, if that's possible: for example, if you have reproducible builds, run them on a few servers which are independent and compare what you get.

We have two more questions from microphone number one, and only a few minutes left. Microphone number one? What's your recommendation for handling credentials in application configuration files? We often need a database user and password or something like that, say in a Spring Boot application.yml or similar. Is there any best practice, or any framework which can handle such things, or do we need to explicitly encrypt these credentials in the application and then have the application decrypt them itself, but then you need the keys somewhere? Yes, so Ansible, for example, comes with a mechanism called Ansible Vault, which encrypts the file with a passphrase that you enter on the command line as soon as you touch the file; if you want to run Ansible, it will ask for the password when starting up. But if you share that password with your developers, everybody has access to the same keys. I would prefer to give everybody, so every person in the team, or even every device, a different key, if that's somehow possible. That's what I was trying to say with the GitHub accounts: that you don't use one GitHub account, you use many of them.

We have one more question from microphone number one and then a question from the internet. Yeah, my question was more about: some of your recommendations are low-hanging fruit, but some of them are just impossible, I mean not sustainable, very hard to maintain. So I'm wondering whether you use all of them every day, or just part of them, or do you give up on some of them in the end?
It depends on the project. What I try to do on my development system is to have these compartments, so that one compromised project does not affect others, because I'm not the only person checking and merging the code, and it quickly gets too much for one person to review. I can't review all the code that I'm currently running on my computer, that's true, but I can try to mitigate what the impact of that will be.

And the question from the internet: what tool would you recommend for diffing a file system? Diff has worked for me so far. Or, what exactly is the question about? Maybe you want to see whether the hash of a file changed: when you have, for example, script file A and script file B and they have different hash sums, that's something I would look up manually. As soon as I have an indication that something was wrong, I would look it up manually, with hexedit or whatever is available.

We have less than one minute left. Any final remarks? Thank you.
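The hash-then-diff approach mentioned in this answer can be sketched as follows; the directory and file names are my own stand-ins:

```shell
# Snapshot a directory tree as a sorted list of per-file hashes.
mkdir -p project
printf 'echo foo\n' > project/script.sh
find project -type f -exec sha256sum {} + | sort > before.txt

# ... the system runs; something changes the script ...
printf 'echo foo\ntouch pwned.txt\n' > project/script.sh
find project -type f -exec sha256sum {} + | sort > after.txt

# Any added, removed, or modified file shows up in the diff.
diff before.txt after.txt || echo "file system changed"
```

Comparing hash manifests instead of raw trees keeps the snapshots small enough to store on a separate, trusted system, which matches the earlier advice about keeping backups off the host you're inspecting.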