Good morning everyone. I'm excited and honored to be speaking here at RubyHack. As you might be able to tell I'm losing my voice, but I'm going to try to make it all the way through, so please bear with me. Today I'll be discussing Ruby web application security, focusing on defense in depth security controls that make it more difficult for attackers to exploit vulnerabilities and that limit the damage of vulnerabilities that are successfully exploited. My name is Jeremy Evans and I maintain numerous Ruby libraries, the most popular of which is Sequel, the database toolkit for Ruby. I also maintain a web toolkit named Roda and an authentication framework named Rodauth that builds on top of Sequel and Roda. I work for a government department in Sacramento, and one of my responsibilities is operating as the department's information security officer. So I'm responsible for all of the information security in the department, including the security of our web applications, which all use Sequel, Roda and Rodauth. Now Ruby web application security is a fairly wide subject area. You want to prevent cross-site scripting, usually by escaping output in templates. You want to prevent cross-site request forgery, usually by using form-specific or session-specific tokens. You want to set an appropriate Content-Security-Policy header for detailed control over where the browser can load content from, and you also want to prevent SQL injection, either by escaping input or by using prepared statements and bound variables. All of those are important, but they are not the focus of this presentation. This presentation focuses on defense in depth approaches for web application security. What does a defense in depth approach mean? It means that instead of just focusing on prevention, you also focus on hardening, mitigation and containment. No matter how hard you try to prevent vulnerabilities, you should assume that your application will still be vulnerable somewhere.
Even if you use secure coding practices in your application development, you are still relying on libraries, and those libraries may also contain vulnerabilities. From a security perspective it makes sense to assume that attackers will attempt to exploit those vulnerabilities. So you need to have a strategy in place that makes general classes of vulnerabilities more difficult to exploit and that limits the damage of vulnerabilities that are successfully exploited. Now some people talk about information security as though it were binary: you're either secure or you're insecure. I think it's better to think of information security as similar to physical security. So consider the safe, long used to secure physical valuables. Safe manufacturers don't claim that their safes are secure and their competitors' safes are insecure. Instead safes have a rating system. A safe with a TL-15 rating can withstand 15 minutes of attack by a qualified safecracker with good tools on a single wall of the safe. And a safe with a TRTL-30X6 rating can withstand 30 minutes of attack by a qualified safecracker with highly sophisticated tools as well as torches, simultaneously on all six walls of the safe. So the basic idea with the security provided by a safe is that you assume the safe will be cracked given enough time. Companies that sell safes will tell you that when you buy a safe, you are buying time. So alongside the installation of the safe goes installation of monitoring equipment, so that attempts to crack the safe will alert the authorities, with the expectation that the safecrackers will not have adequate time to crack the safe and get away before the authorities arrive. Applying this idea to information security, you should assume that vulnerabilities in your application will be exploited given enough time unless you intervene.
So you want to make it sufficiently difficult to exploit these vulnerabilities, and have sufficient monitoring tools in place to inform you of exploit attempts, such that after receiving notification of a possible exploit attempt you can try to figure out and fix the underlying vulnerability before the vulnerability is successfully exploited. And if that's not possible, you will at least want to limit the possible damage of successfully exploited vulnerabilities. Note that increasing exploit difficulty and limiting exploit damage adds significant operational constraints. This presentation is not a secure programming to-do list. It presents some security controls, as well as the costs and benefits of each of those controls. Whether the benefits of a security control exceed the costs is highly dependent on the application, the data being stored, and how important security actually is relative to performance, maintainability and flexibility. So the first security control I'm going to discuss is using separate database users. In most Ruby applications the entire application is served by a single database user, usually the owner of the database, which has full access to read and modify any part of the database. With this approach, an SQL injection vulnerability in any part of the application can result in all data in the database being at risk. So assume you have an admin application that is used by employees and a separate application that is publicly accessible. There's often a very large difference in terms of the database access needed by the public application and the database access needed by the admin application. In cases like that, using a separate database user for the public application can greatly reduce your exposure. If an attacker can exploit an SQL injection in the public application, they can only get access to the data that is explicitly granted to the public application's database user.
If you're using a microservice approach with a shared database, having each microservice use a separate database user can also significantly increase security. If there's an SQL injection in any microservice, it only grants access to the data that is needed by that microservice, as opposed to all data in the database. Now if you're developing a majestic monolith it's a bit more complex to use separate database users, but it's still possible. The easiest way to implement separate database users in a monolith is to use a database library that supports sharding. Sharding is often used to connect to databases on different hosts, but it works just as well to connect to the same database using different user credentials. Sequel has had support for sharding for many years and makes it easy to use a separate database user for all queries in a given block of code. This allows you to get most of the database security benefits of using separate database users while still retaining the simplicity of developing a monolith. Now ActiveRecord doesn't support sharding by default, but there are various extensions that you can use to add sharding support. And last month at RailsConf they announced that ActiveRecord 6 will support multiple databases, but I'm not sure if sharding is on their agenda. So one way to benefit from using multiple database users without having to manage separate connections per user is to use security definer database functions. By default, when you execute a database function, the function runs using the access permissions of the user who executes the function. This is similar to the normal operating system model, where if I run a program it runs using my access permissions. But UNIX offers the ability to create a program that runs using the access permissions of the user who owns the program instead of the user who runs the program, and these programs are called setuid programs.
So the ping program is usually a setuid program. If I execute ping as a normal user, ping runs as root so that it can open the raw socket that it needs to send packets. Security definer database functions are the database equivalent of setuid programs, in that when a user executes the database function, the function runs using the access permissions of the user who defined the function instead of the user who is executing it. This is useful if you want to give specific database users a specific type of access to a given table without giving them full access to the table. So in the Rodauth authentication framework, the recommended configuration is to use two separate database users. The database user that the application runs as does not have access to read the password hash table. Instead, the other database user owns the password hash table and creates a security definer database function that can be used to check password hashes. So if there is an SQL injection vulnerability, the attacker cannot export the password hashes to perform an offline attack. They would be limited to using the database function to check password hashes, which is going to be multiple orders of magnitude slower. So now that I have discussed the benefits of using multiple database users, let's talk about the costs. The first issue is that database migrations become more complex. Instead of just worrying about what tables or columns you want to add to the database, you need to consider what access to each of the tables and columns is needed by each of the database users you have, and that requires significant upfront analysis, which in complex applications can be quite difficult. To make sure your application runs correctly when using multiple database users, you need to make sure that your tests use the appropriate database user when running.
That's not usually difficult, but it becomes problematic when the database user does not have the appropriate access to set up the initial database state for each of the tests. One possible approach to this is setting up the initial state as one database user and then running the tests as a different database user. That's the easiest approach to implement, but it does not allow for transactional testing, since the two database users cannot see the uncommitted changes made by the other user. I prefer to use transactional testing, so I came up with a different approach, which initially felt like a hack but works pretty well in practice. You create a security definer database function in only your test database that allows other database users to run SQL code as the database owner. When performing database setup for tests where the database user does not have the appropriate access, you call this database function with a string of SQL code; the function has the appropriate access, and the changes are visible to the current transaction, so transactional testing still works. As long as you never call that database function in your application and only call it in your tests, you get the performance and simplicity benefits of transactional testing, and you get the reliability benefits of using the same database access rights as production. So the next defense in depth security control I want to discuss is dropping privileges, which is often referred to as privdrop. The way most Ruby applications are deployed, a regular operating system account runs the Ruby web server, and this account must have at least read access to all parts of the application in order to load it.
An alternative approach starts the process as root. After loading the application, but before accepting connections, the application process switches from root to an operating system account with reduced privileges. As the application has already been loaded, it no longer needs access to the files that were necessary to load it, and you can actually make such files unreadable to the user account that the application runs as. Many Ruby applications read template files like ERB and Haml on demand, but outside of the templates, most of the application code is loaded at startup and does not need to be accessed after the application has been loaded. So assume an attacker has exploited a vulnerability in your application that allows them to read arbitrary files on the file system and display the output. They can use that vulnerability to read all of your application's code, looking for potentially worse vulnerabilities such as SQL injection or remote code execution. Removing the ability to read the application's files can increase the time required to find additional vulnerabilities. Now choosing to start the application as root and then drop privileges has a security trade-off. It makes you more secure when the application is running, but it makes you less secure when the application is loading. If a developer accidentally commits code that would fail for a regular user but would succeed for the superuser, that code now succeeds. Consider this command, and assume that the TMPDIR environment variable is not defined. As a regular user this fails and does no damage. As the superuser, you now get a chance to test your backups. So using privdrop requires a high level of trust in the application developers and in all code that your application loads during startup, such as all gems in use. And while you generally have to trust all the gems you're using anyway, in a privdrop scenario the penalty for a mistake can be much worse.
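A minimal privilege-dropping sketch using only Ruby's standard library (the account name is illustrative, and this assumes the process was started as root):

```ruby
require 'etc'

# Permanently drop from root to an unprivileged account. Order matters:
# supplementary groups and the primary group must be changed first,
# because once the user ID is dropped the process no longer has
# permission to change its groups.
def drop_privileges(username)
  user = Etc.getpwnam(username)            # look up uid/gid for the account
  Process.initgroups(username, user.gid)   # set supplementary groups
  Process::GID.change_privilege(user.gid)  # drop group, irrevocably
  Process::UID.change_privilege(user.uid)  # drop user, irrevocably
end

# In a web server this would run after the application is fully loaded,
# before accepting connections, e.g.: drop_privileges('app')
```

Using `change_privilege` (rather than the saved-ID-preserving variants) makes the drop irreversible, so a compromised worker cannot switch back to root.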
In terms of support in Ruby web servers, both Unicorn and Passenger support privdrop, but it's not currently supported by Puma. Now privdrop by itself does not improve security greatly, but if you're starting your process as root you have an additional security option available to you called chroot. chroot allows you to change the root directory of the process, and it's something only the superuser can do. If you're going to use chroot, you should definitely privdrop after doing the chroot, because you don't want your application running as root, and additionally there are ways to escape the chroot if the privileges are not dropped. When you run chroot, you pass it a directory that is under the current root directory, and that directory now becomes the root directory from the perspective of the process. So before chrooting, listing entries in the root directory shows the real root directory; after chrooting to the application directory, listing entries in the root directory shows the contents of the application directory. After you chroot, you can no longer access any files that are outside of the chroot. So again, assume an attacker has found a vulnerability that allows them to read arbitrary files. Without a chroot, an attacker can read any file on the entire file system, as long as the user's access permissions allow it. Within a chroot, the attacker is limited and can only read files if they're under the application's directory, and that significantly limits the severity of the vulnerability. Now assume an attacker finds a vulnerability that allows them to execute an arbitrary program on the server. Without a chroot, this could potentially execute any program at all on the server, such as a shell, which would make it easy for them to attempt to compromise the system further. Within a chroot, they would generally not be able to execute any programs at all, because you usually do not have any executable programs under the application's directory.
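Putting chroot and privdrop together, a sketch of the startup sequence (assuming the process starts as root; the path and account name are illustrative):

```ruby
require 'etc'

# chroot first, then drop privileges. The order is forced: chroot
# requires root, and staying root inside the chroot would allow
# escaping it. Note the passwd lookup happens before the chroot,
# since /etc/passwd is typically not inside the new root.
def chroot_and_drop(app_dir, username)
  user = Etc.getpwnam(username)
  Dir.chroot(app_dir)           # app_dir becomes "/" for this process
  Dir.chdir('/')                # ensure the cwd is inside the new root
  Process.initgroups(username, user.gid)
  Process::GID.change_privilege(user.gid)
  Process::UID.change_privilege(user.uid)
end

# e.g. after the application is loaded:
# chroot_and_drop('/var/www/app', 'app')
```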
So chroot can prevent the exploitation of some vulnerabilities, such as arbitrary file execution, and it significantly limits the effect of exploiting some other vulnerabilities, such as arbitrary file reading and file writing. chroot makes the exploitation of most other vulnerabilities more difficult. After locating a vulnerability, one of the first things the attacker is going to want to achieve is a remote shell, which lets them easily execute arbitrary commands on the server. Within a chroot, attaining a remote shell becomes much more difficult. Even if it does not eliminate the attacker's access completely, it will slow down the exploitation of the vulnerability, which gives you more time to fix the problem. While chroot offers significant security benefits, the restrictions it imposes make it challenging to implement. Most Ruby applications were not designed to be run when chrooted, and many Ruby applications will run programs that are outside of the application's directory. The only way for that to work after chrooting is to move the files into the chroot, and if the program is dynamically linked instead of statically linked, you also need to move all shared libraries into the chroot. Now anytime you are moving programs into the chroot, you are potentially reducing your security, because you're offering an attacker additional attack surface. So you should be very restrictive about any programs you do add to the chroot, and try to avoid adding programs if possible. Now if your application references any absolute paths at runtime, you should probably change those to relative paths, because an absolute path that works when not chrooted, such as when loading the application, will not work after the application has been chrooted.
One way to work around that is to create a directory tree that is under the application directory but contains a symlink back to the root directory, and that allows the same absolute path to work in both regular and chrooted modes. Now the biggest issue with using chroot is that any code that requires files at runtime will only work if the required files have already been loaded or the required files are under the application's directory. There are quite a few gems that will require files at runtime, usually because the required file is only needed in certain methods, so instead of always loading the file, only the methods that use it will require it. Runtime requires can be worked around by just requiring the file before chrooting; since the file is already required, future attempts to require it will do nothing. Unfortunately, there is an even more insidious problem that affects chroot, and that is the use of autoload. autoload is basically a hidden runtime require, but instead of calling a method to require a file, you just need to reference a constant. Outside of a chroot, autoload can appear to be nice to have: if the constant is never referenced, the related file is never loaded, and that can save memory. Inside of a chroot, autoload is a time bomb. If you require a file or a library that uses autoload for any of its constants, it can appear to work fine even when you are chrooted. However, when a code path is taken at runtime that references the constant, Ruby tries to require the related file, and since the file does not exist inside the chroot, it blows up. The most prominent gem that uses autoload is Rack itself. Rack uses autoload for all of its internal classes and modules. In many cases these classes and modules will be loaded at application startup, which would be before the chroot. However, there are a lot of constants that are only referenced at runtime when handling requests.
So if you chroot and do not reference the appropriate constants before chrooting, most requests will work fine. Then some user tries to upload a file, and that references Rack::Multipart, which causes Ruby to require the related file, which blows up because the related file is not under the chroot. Another common gem that uses autoload is the mail gem, and while it does not use autoload for all of its internal classes and modules, it does use autoload quite a bit. Now in the mail gem's defense, they do offer an eager autoload method that will require all autoloaded paths, so that referencing the constants at runtime will not break things. So I mentioned that runtime requires become problematic when using chroot, and that makes development environments that reload code a bit of a challenge to implement. If you want to chroot in production, you're going to want to chroot in development to make it easier to find possible issues that are caused by chrooting. Development mode code reloading usually works by looking for modifications in certain files that have already been required. If the reloader detects a change in a required file, it removes the related entry from the $LOADED_FEATURES array, removes the related constants from whatever classes or modules they were defined in, and then requires the file again. Now I mentioned earlier that absolute paths are a problem, and entries in $LOADED_FEATURES are stored as absolute paths. So when chrooting, all existing entries in $LOADED_FEATURES that are inside the application's directory need to be modified to strip the application directory from the start of the path. That's because after chrooting, the absolute path to a file in the application directory no longer has the application directory at the start of the path; the application directory is now the root directory.
For entries in $LOADED_FEATURES that are outside of the application's directory, you would not be able to reload those even if they change, so there's not much you can do. Now most reloaders will only reload application code; they don't look for changes in all required files, so you usually don't care about reloading files that are outside of the chroot. Unless the development reloader has specifically been designed to support chroot, it will probably not work correctly when chrooted, and the only reloader I know of that supports chrooting is rack-unreloader, which is a gem I maintain and added chroot support to. In addition to running your development environment chrooted, in order to make sure things run correctly when chrooted in production, your automated test suite should also be able to run while chrooted. In the test environment, you want to chroot after loading the entire application but before running any specs. Now with RSpec, I think you can probably chroot in the before suite hook, but I don't have personal experience doing that. With Minitest, you would chroot directly before calling Minitest.run, and if you're using Minitest's autorun, you would generally chroot in an at_exit block. In some cases it can be helpful to run the tests without using chroot and still catch runtime require issues that would be caught when chrooting, and this is useful if you want the ability to run tests without being root. In terms of catching runtime requires, a good way to do that is to freeze $LOADED_FEATURES; then any attempt to require a file at runtime, either via the require method or via an autoloaded constant, will raise an exception because it would not be able to modify $LOADED_FEATURES. Now the only Ruby web server that currently supports chroot is Unicorn. So the next security control I'd like to discuss is the use of a firewall. A firewall is a network device that filters or restricts traffic based on a policy.
Firewall policies should generally use a whitelist approach, where all traffic is denied by default and only traffic explicitly allowed is permitted to pass through the firewall. Most operating systems have a firewall built in these days, and in some cases those firewalls are enabled by default. Now there are a couple of types of filtering you can do in a firewall. The first type is called ingress filtering, which filters traffic coming into the server, and the second type is called egress filtering, which filters traffic leaving the server. For a web application server, you want the firewall to do ingress filtering, usually allowing only HTTP and HTTPS traffic, and from all IP addresses if the application is publicly accessible. It's best to only allow SSH traffic from specific IP address ranges, and you don't want to allow other ports unless you specifically want them to be accessible. By doing ingress filtering and restricting traffic to only specifically allowed ports, you protect yourself against accidentally making other services available that may be running on the same server, and this is especially important if you're running any services on the same server that do not use authentication by default, such as Redis or MongoDB. It's also a good idea to use egress filtering on the firewall so that only specific traffic is allowed outbound. You'd probably want to allow access from the web server to the database server, and if your application uses any third-party APIs, you probably want to allow access to those through the egress filter. Ideally you would set the egress filter to only allow those connections to specific IP address ranges, but in some cases that's not possible. By having an egress filter, you limit the ability for your server to be compromised and used to attack other servers, and you also make it more difficult to exfiltrate information out of your server.
Now most server-level firewalls will support egress and ingress filtering on a per-user basis, so if you are using privdrop you can combine that with user-specific firewall rules. This can make it possible to allow root to make new connections to the database but not allow the application user that ability. With that approach, even if the application gets compromised, it cannot create new database connections; it can only use the existing database connections. And if you're using separate database users, this makes sure that a compromised application cannot make a new connection to the database using different credentials than were used at application startup. Note that this approach requires you to preallocate all your connections at startup, and if a database connection drops for any reason, you need to kill the application worker process so that a new worker process can be created, which will create a new database connection at startup. So the next security control I'd like to discuss is referred to in the OpenBSD community as fork+exec, and it's a way to offer protection against address space discovery attacks. When using fork, the child process inherits the memory layout of the parent process, so if you're using a forking Ruby web server such as Unicorn, or Puma in cluster mode, the web server uses fork to create the worker processes, which will all start with the same memory layout as the parent process. Additionally, most users enable the feature to preload the application before forking, and this speeds up the creation of the worker processes, since they don't have to load the application again. Preloading can also reduce memory usage significantly, since the memory pages of the worker process and the parent process can be shared until one of the processes writes to the memory page after forking.
Unfortunately, because all of the worker processes and the parent process share the same memory layout, the application becomes more vulnerable, due to the fact that in many cases, actually exploiting a vulnerability in the application requires knowledge of the application's memory layout. An attacker can attempt to determine the memory layout by submitting requests: if they guess the memory layout correctly, their exploit succeeds, and if they guess incorrectly, the application worker process will usually crash. Unfortunately, with a pure forking web server, after the worker process crashes, the parent process will generate a new worker process with a very similar memory layout. In that case, the attacker just needs to keep trying their exploit by submitting slightly different requests each time, because all the worker processes have roughly the same memory layout; every time they attempt to exploit the vulnerability and fail, they get more information about the possible memory layout of the process. Eventually they can determine enough of the memory layout to get a successful exploit to work, and this is commonly needed in order to mount what is called a blind return-oriented programming attack. With a fork+exec approach, the parent process does not preload the application before forking; after the parent process forks a worker process, the worker process calls exec using the same program. So if the parent process is Unicorn, after forking, the worker process also execs Unicorn, but with an environment variable set to tell Unicorn that it is operating as a worker process. When you use exec, an entirely new memory layout is generated for the program, but the process still inherits the file descriptors of the parent process, including the web server's listening socket.
After the worker process calls exec, it then loads the application, and after that it is ready to start handling requests using the listening socket it inherited from the parent process. Since the worker process calls exec, it has an entirely different memory layout than the parent process and all the other worker processes. If the worker process crashes and the parent process spawns a new worker process to replace it, that worker process will also have a completely different memory layout, making address space discovery attacks impractical. Currently fork+exec is only supported by Unicorn, using the worker_exec configuration option. Now if you're not currently preloading your application, there is a slight memory cost to enabling fork+exec, probably about 10 megabytes per worker process, but beyond that there are no major issues. If you are currently preloading your application, you would need to stop doing that to use fork+exec, and that can significantly increase the memory usage of your application, since the worker processes cannot share any memory with the parent process. So the decision to enable fork+exec is really a trade-off between performance and security: enabling fork+exec can improve security, but it will definitely use more memory, and memory is the limiting factor in some execution environments. So the next defense in depth security control I'd like to discuss is the use of system call filtering. By default, on most operating systems, any process can issue any system call, and the kernel will handle access control for the system call based on the user who issued it. That's nice and convenient, especially if you are an attacker. Once an attacker finds a vulnerability in a program and is able to successfully exploit it, in many cases they will try to increase their access privileges from the user running the process to root, and this is referred to as privilege escalation.
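The core of the fork+exec pattern can be sketched in a few lines (the names and environment variable are illustrative; this shows the idea behind Unicorn's worker_exec option, not its actual implementation):

```ruby
require 'socket'
require 'rbconfig'

# Parent: open the listening socket, then fork a child that immediately
# execs a fresh Ruby process. exec gives the child a brand new address
# space layout, but open file descriptors survive exec, so the child
# can keep serving on the inherited listening socket.
server = TCPServer.new('127.0.0.1', 0)   # port chosen by the OS

worker_code = <<~'RUBY'
  require 'socket'
  fd = Integer(ENV.fetch('LISTEN_FD'))
  sock = TCPServer.for_fd(fd)   # reuse the listener inherited across exec
  puts "worker #{Process.pid} listening on fd #{fd}"
RUBY

pid = fork do
  exec({'LISTEN_FD' => server.fileno.to_s},
       RbConfig.ruby, '-e', worker_code,
       server.fileno => server.fileno)   # keep the fd open across exec
end
Process.wait(pid)
```

In a real server the worker would load the application after exec and then accept connections on the inherited socket; the point is that each exec'd worker gets its own freshly randomized memory layout.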
The typical way that privilege escalation is attempted is to find what is called a local root vulnerability. A local root vulnerability assumes the attacker already has the ability to execute arbitrary code on the system as a regular user; if the vulnerability is exploited, it allows running arbitrary code as root. The two most common ways to get local root are to find and exploit a vulnerability in a setuid root program, or to exploit a vulnerability in one of the kernel's system calls. If you're using a chroot to limit file system access, in general there should be no setuid root programs accessible to the process, which leaves the system call approach as the only way for the attacker to escalate their privileges. Most common operating systems support over 200 system calls, and each one of these system calls could potentially have a vulnerability that could be used to escalate privileges. By restricting the system calls available to the process, and restricting the arguments to those system calls, you can eliminate potential vulnerabilities the attacker could exploit, and this is referred to as reducing the attack surface. In addition to reducing the attack surface, one other advantage of system call filtering is that few attackers expect it. Many attackers will use system calls that would work if the system calls were not filtered, and if the process crashes as soon as one of those system calls is used, that can provide an early warning that an attacker has found a vulnerability in your application, which gives you more time to fix the vulnerability. System call filtering in general also makes it more difficult to exploit many different types of vulnerabilities. One example of system call filtering is filtering the open system call, which is used for opening files.
So if your application only needs to open files for reading at runtime and should not be opening files for writing at runtime, you can filter arguments to the open system call such that attempts to open files for reading will succeed but attempts to open files for writing will fail. Ideally you would limit the system calls and the arguments allowed to the minimum that the application needs to function, but that can be quite difficult in practice unless you have a good idea of every system call that your application is expected to use. Even simple approaches, like not allowing the opening of files for writing, can be problematic at multiple levels of the stack. By default, Unicorn will open new files for writing for large request bodies. You can work around this by using the client_body_buffer_size configuration option and setting it to a value higher than the client body size limit of the reverse proxy, and this makes Unicorn buffer all uploaded files in memory instead of creating temporary files. By default, Rack will open new files for writing for each file uploaded, even if the files are very small. Now you can set the rack.multipart.tempfile_factory setting to a custom proc to try to work around this, but most application code expects that the object returned by the factory has a path that is visible in the file system. Now system call filtering can also cause issues when testing, since some testing libraries use additional system calls that are not used by the application itself. For example, if you don't expect your application to execute other programs, you can filter the execve system call, and if your application does not execute programs and none of the libraries that the application uses execute programs, you may be okay. But maybe you use minitest to test this application, and if your tests run fine, you have no issues.
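The Unicorn buffering workaround mentioned above might look like the following sketch; the 2 MiB figure is an assumed value, and the only requirement from the talk is that it exceed the reverse proxy's client body size limit:

```ruby
# config/unicorn.rb -- sketch; the 2 MiB value is an assumption
# and must be set higher than the reverse proxy's client body
# size limit (e.g. nginx's client_max_body_size)

# Buffer request bodies in memory up to this size rather than
# spilling them to temporary files, so Unicorn never needs to
# open a file for writing while receiving a request
client_body_buffer_size 2 * 1024 * 1024
```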
However, if one of the tests fails because a string value does not match the expected string value, minitest will try to execute a diff program in order to display more informative test results, and this fails if the execve system call is filtered. So you can actually set Minitest::Assertions.diff = false so that minitest still works when you're filtering the execve system call. Now system call filtering is operating system specific, so you have to read your operating system's documentation for how to enable it. Most of my experience comes from using OpenBSD, which has a system call filtering implementation called pledge that's very easy to use, but the system call filtering is a little bit coarse. I wrote an easy to use Ruby wrapper for pledge. Most people are probably running their applications on Linux, which offers system call filtering using seccomp. Now seccomp is very flexible, since you can decide the behavior for every system call and the arguments to the system calls, but seccomp is also a lot more complex. It may require detailed knowledge of the Linux kernel, glibc, and all the system calls that your application uses. Also, the system calls the application uses can depend on the kernel and glibc versions, and can change during upgrades, which can cause additional maintenance issues. So proper system call filtering can significantly reduce the risk of exploitation, as well as reduce the risk of privilege escalation after exploitation. On OpenBSD, using pledge, it's fairly easy to do, and I would recommend it for most applications. For other operating systems, implementing system call filtering in a manner where the application still works and is maintainable probably has high upfront and ongoing costs. So whether to use it definitely depends on how important security is relative to maintainability.
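As a sketch, the minitest setting mentioned above looks like this, together with a pledge call for OpenBSD; the promise string is only an illustrative assumption, and the pledge gem is assumed to be installed:

```ruby
require 'minitest'

# Stop minitest from shelling out to an external diff(1) program
# on string assertion failures -- that shell-out issues an execve
# system call, which would crash a process filtering execve
Minitest::Assertions.diff = false

# On OpenBSD, the "pledge" gem (assumed installed) restricts the
# process to the listed promise sets; the promises here are only
# a guess at what a simple web application might need
if RUBY_PLATFORM.include?("openbsd")
  require 'pledge'
  Pledge.pledge("stdio rpath inet")
end
```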
Now I originally planned to discuss memory protections, such as detecting stack overflows, heap overflows, use-after-frees, and double frees. Unfortunately I don't have enough time to discuss all of those memory protections, but I do encourage you to look into implementing them if they're not already the default behavior in your operating system. In most cases, to implement these memory protections you just need to use additional flags when compiling, or link against a malloc implementation that is designed for security instead of for performance. So that ends the defense in depth security controls I wanted to discuss. As I mentioned, all of these security controls have trade-offs, and the decision whether to implement them will depend highly on the application and your organization's tolerance for risk. However, I think I'd be remiss if I didn't at least offer some basic guidance. First, consider whether the data you have is really worth protecting. How sensitive or confidential is the information that you deal with? If you're dealing with anonymized data or the manipulation of public data sets, maybe the data does not warrant the use of these security controls. Consider who can access your application. If you're designing an application for internal employees that will not be accessible from the internet, the risk of attack is lower, and maybe in that case your effort is better spent on improving the user experience. Consider how much control you have over your application's environment. If you are running on your own hardware or virtual machines, you may be able to use all the security controls I've discussed. If you're using a platform as a service provider, you're going to be limited to what the provider supports.
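For the memory protections mentioned above, a hedged sketch of what the extra compile and link flags might look like with GCC or Clang follows; flag availability depends on your toolchain, and the malloc library path is a placeholder assumption:

```shell
# Sketch: hardened build flags; check your compiler's docs for
# which of these it supports

# Stack protector canaries and fortified libc string functions:
export CFLAGS="-fstack-protector-strong -D_FORTIFY_SOURCE=2"
# Full RELRO: make relocation tables read-only after startup:
export LDFLAGS="-Wl,-z,relro -Wl,-z,now"
./configure && make

# Alternatively, preload a security-focused malloc implementation
# (the library path here is a placeholder):
LD_PRELOAD=/path/to/libhardened_malloc.so ruby app.rb
```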
Now if your application is accessible from the internet and contains sensitive or confidential data, my recommendation would be to first look at using multiple database users with reduced database permissions, in order to limit the possible damage from exploited SQL injection vulnerabilities. If you have the ability to configure firewall rules, I would recommend doing so; the initial implementation is fairly easy and the ongoing maintenance costs are pretty low. If you have memory to spare and the increased memory usage is not a problem, then you can consider implementing fork plus exec. Now if your application, or the server it is running on, has any special access to other servers or applications that are not accessible from the internet, or security is a high priority compared to ease of maintenance and you have your own hardware or you're running on your own virtual machines, then you can consider using privilege dropping, chroot, and/or system call filtering. Now it is possible to use all of these security controls at the same time. For the last year, all the web applications that I maintain at work have used firewalls, fork plus exec, privilege dropping, chroot, and system call filtering, and all applications that are accessible from the internet also use separate database users. It took significant work to set this up, especially since I had to add support for fork plus exec and chroot to Unicorn, and add support for chroot to rack-unreloader. Thankfully, I can say that the ongoing maintenance costs have been pretty low. In the first four months after implementing these security controls, we had three minor production issues that were caused by one of those security controls, and in all three cases we found that the issues were caused because the production environment did not completely match the test and development environments. Thankfully, in the last eight months we haven't had any production issues related to those security controls.
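The multiple-database-users recommendation above can be sketched with Sequel as follows; the user names, adapter, and database name are assumptions, and the actual permission grants happen once in the database itself:

```ruby
# Sketch; user names, the postgres adapter, and the database name
# are assumptions. The grants are done once by the DBA, e.g.:
#   GRANT SELECT, INSERT, UPDATE, DELETE ON ... TO app_rw;
#   GRANT SELECT ON ... TO app_ro;
require 'sequel'

# Normal web requests connect as a user that can read and write
# rows but cannot ALTER or DROP tables, so an exploited SQL
# injection cannot modify the schema:
DB = Sequel.connect('postgres:///app_production', user: 'app_rw')

# Read-only code paths (reporting, public pages) connect as a
# SELECT-only user, limiting injection there to data disclosure:
DB_RO = Sequel.connect('postgres:///app_production', user: 'app_ro')
```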
So my hope for this presentation is that it encourages you to try implementing some of these security controls, and I also hope that if you choose to implement them, some of the techniques I've described here will help you avoid the production issues that we experienced. That concludes my presentation. I want to thank all of you for listening to me. If you have any questions, I'll try to answer them now. Any early warnings from the system call filtering? Oh, early warnings. Probably not. Most of the times that we do run into things failing, it's usually because we didn't account for something that should be allowed, not because of something actually attacking us. But every time there is a crash, we get notified: if anything violates the system call filtering and the process crashes, we look at it and we can see whether this is someone looking for a vulnerability or something that should be allowed. So far everything has been something we should be allowing, so we haven't actually caught anything yet, but we don't get that much traffic on our publicly facing applications, so it hasn't been a big issue for us. Will these slides be available at the conference? Yes, yes, I'll try to put them up, probably whenever I get back to my desk. Sorry, I very much apologize, but my voice has been getting worse every day and I've been fearing this, so thankfully I'm presenting today, because tomorrow I might not be able to speak at all. Any other questions? Are there any special considerations you can think of for running in an environment like AWS, where you also have security groups and VPCs? Yeah, so the thing about using Amazon Virtual Private Cloud: I think egress filtering is not available in general on Amazon, but if you're using a Virtual Private Cloud, I believe you have the ability to use both ingress and egress firewall rules.
On Amazon, most of the time you're using virtual machines. If you're using containers, those are a little bit different. Depending on what part of Amazon you're using, containers give you some of these security features: a container is sort of like a chroot, but it's a chroot with a bunch of programs stuffed into it, so there's a lot more attack surface than loading the program and then chrooting, where there are no accessible programs available. So it's better than nothing, but it's not nearly as good as the post-exec chroot that I'm talking about here. All right, thank you, thank you very much.