This is the CloudABI talk. Just make sure you're in the right place. Yeah, so we have Alex Willmer speaking to us about security on POSIX.

Greetings, everyone. Thank you for coming to this briefing on the inquiry into the Sol III defeat 20 cycles ago. The contents of this briefing are classified Duchess Royal Bloodline. Anybody who does not have that classification must leave the room now. Okay, with the formalities over, we can begin.

My name is Alex Willmer. My mother was Susan Willmer. She was chief of docking during the Sol III harvest 20 cycles ago. It was she who allowed that fateful ship to dock: scout ship TLV 3495, the one that we'd presumed destroyed aeons ago. This was the ship that was carrying two human cable repair engineers, and those cable repair engineers were carrying their Jolly Roger superweapon. That led to the destruction of the entire fleet and the loss of the Sol III harvest, along with nearly a billion minds.

I've been part of the team for the last 15 cycles investigating the reasons for this defeat. There were many contributing factors. Our synchronization signal impinged on human communication bands. They were able to detect this and from it calculate the time of harvest. This resulted in their human leaders surviving the initial harvest attack. There were numerous smaller incidents, such as a trainee GX firing on a human welcome wagon. We've seen such attempts to communicate before. They've of course never been successful, but in this case critical seconds were lost, and in the confusion the humans were able to escape the initial fireball.

Another example I would like to highlight: the initial counterattack by the humans, which was of course futile. Their kinetic weapons, their missiles, could not penetrate our energy-based shields. But in one case a downed ship did lead to the capture of the pilot. The pilot was taken to the human leadership, where the pilot was tortured, interrogated, mind-probed.
During this, the pilot did reveal our negotiating position, our harvest tactics, and our general disposition. This resulted in counterattacks by the humans of a thermonuclear nature. Of course, this was still futile. But these were contributing factors.

Finally, there was one more that I'd like to highlight. The captured craft was not challenged, was not questioned, when it approached our main harvest ship. This allowed it to gain access to the command carrier, and that allowed the humans to gather intel on our initial invasion plans.

All of these pale in significance next to the principal reason for the Sol III defeat: the capture of the scout ship. From this capture, the humans learned of our existence. They learned of our biology. They learned of our technology. Critically, they learned of our UNIX operating system. From this UNIX, from our technology, the humans went on to develop various things. Human code words include Roswell, Area 51, UNIX, Bell Labs, ARPANET, AOL, email. All of these are pale imitations of our ConsensusNet, of course. But they gave the humans a critical foothold into our protocols and systems that allowed them to upload a virus to Reclamation Pump 369282. That Reclamation Pump then communicated on ConsensusNet, spread, sent commands fleet-wide, resulting in the disabling of all protection fields. From this, the humans were then able to use one of their primitive thermonuclear devices, destroying our carrier ship. Our thoughts, of course, go to all the families of those aboard.

So for the past 15 cycles, we have been carrying out the investigation. There are numerous lessons that have been learned regarding procedure and command decisions. This briefing will concentrate on some of the technological implications. We find in the root cause analysis that Pumpmon, running on that pump, was vulnerable to the humans' attack. That is how they got their foothold. That is how they were able to instruct all defense fields to switch off.
Without that, their attack would have been useless. The problem with Pumpmon was not a simple buffer overflow or stack-smashing attack. The problem was more architectural. Pumpmon had numerous capabilities that it did not need in order to fill its role of monitoring that pump. It could read global files. It could monitor processes. It could create network sockets to other places on ConsensusNet. All of these were unnecessary, and all of these were exploited by the human Trojan. The table you see is a quote from the report; please refer to that if you need the full details.

So, the architectural flaws of UNIX boil down to discretionary access control. That is, access control is not enforced by default. There are things that are open that do not need authenticated access. This means that programs on UNIX systems start with excessive capabilities, and once compromised, programs can acquire further capabilities simply by opening them. There are global resources and global state throughout the UNIX system. This obstructs running programs securely. It obstructs writing testable programs, because tests have to try to inject these normally global resources inside a restricted test environment. It obstructs writing reusable programs, because these programs assume a full UNIX operating system, and it is very difficult to audit them to say what they actually use. System administration just does not work at harvest fleet scale. Beyond a million nodes, we just do not know what these systems are doing.

Our team would like to propose a human technology that has actually been adapted from their reverse-engineered version of our UNIX. This human technology is called CloudABI. It is a relatively recent invention for the humans, approximately two years old. Under CloudABI, programs start with the ability only to spawn threads and to allocate memory. Unless they are provided with access to external resources, they cannot access them.
They cannot acquire further access to external resources; they can act only through the capabilities provided to them when they are started. The implication of this is that it is safe to run an unknown CloudABI binary if it is given no resources. The worst thing it can do is allocate too much memory and burn through CPU. As a result, with explicit capabilities passed into the program at startup, it is much easier to audit these programs to say what they need. As a consequence, it is much easier to test these programs. This leads to better release engineering, and could allow for higher-level orchestration: the ability to migrate processes between hosts, rather than virtual machines or containers. This could lead to more efficient resource use in fleets, and certainly to more secure resource use.

To give you a bit of background on this CloudABI technology, it was initially developed by a human called Ed Schouten. He is located on the European continent. It was initially for the human derivative of our UNIX called FreeBSD. It is now available for multiple human operating systems, and is compatible with our ConsensusNet and original UNIX.

Some of you may be familiar with a human technology called Capsicum. CloudABI is derived from this Capsicum project. In Capsicum, processes initially get access to global resources and can acquire further resources, just like any other UNIX process. But a Capsicum process can call a function called cap_enter, after which syscalls that would allow it to acquire further resources are blocked. They return an error and/or result in the process being killed. This allows for more secure processes after they have left their initial startup phase.
The problem with this Capsicum project is that integrating an external library into a Capsicum process causes runtime errors, strange behaviours, hangs and bugs, because a library buried deep in the call stack might try to open a file, or might try to initialize a pseudo-random number generator from a device, then fail and fall back to a less secure method such as the time of day or the current PID.

The innovation that CloudABI makes is to have Capsicum on by default. It is always on. CloudABI processes cannot call open. They cannot see global resources such as process tables, file systems or user databases unless explicitly given access.

To give you an idea of what we remove: all of these APIs are unavailable to a CloudABI process. The first category is simple common sense. These are APIs that were not greatly designed in the first place, or that tend to result in buffer overflow bugs. There are thread-safe, buffer-safe alternatives already available for both UNIX and CloudABI. The second category is basically the UNIX file system. On a UNIX operating system, a process can open, or attempt to open, any file by its path. This is impossible in CloudABI. There is no open function. There is no stat function. There is no getpid. There is no getuid. Next, we move on to the mutable-state functions. These are ones that tend to have an effect process-wide, regardless of whether a process is multi-threaded. These are removed because they make programs harder to reason about. Removing them simplifies the API, and there are thread-safe alternatives. Standard in, standard out and standard error are also removed, simply because they are a global resource that should be explicitly declared. argv is also removed. The method of passing arguments to a CloudABI process is incompatible with argv, which relies on acquiring resources based on string values; that is disallowed.

After removing these things, we add one simple concept: UNIX file descriptors become capability tokens.
These are the tokens by which a CloudABI process acquires all resources. All APIs in CloudABI that allow acquisition of new resources require an existing file descriptor to be passed in. A file descriptor might describe a directory, a file, a socket, or even the handle to control a sub-process.

The second thing we add is a single application binary interface. This means that a CloudABI process, once compiled on any UNIX system, native or human, will run on any other UNIX system without recompilation. The ABI is available for the following human systems: FreeBSD, Arch Linux, Debian, Ubuntu. It is even available for their macOS. The support is in progress on the Linuxes, but with the next release of the humans' FreeBSD it will be a native feature.

So it's best at this point to illustrate with an example. We'll be taking a very simple toy case of a UNIX utility, ls. This takes the name, or the path, of a directory and prints out the names of the files and folders inside it. This is a very simple example, stripped down to illustrate the differences. You will note that the process is taking in a string and assigning it to a directory-path variable. It is then passing this string down to the operating system, and the operating system is acquiring resources on behalf of the process. If we could not see the source code of this process, we would not know what it was capable of. It might list the directory; it might list the directory and send those results back to the humans for further analysis. It might encrypt the contents of the directory; it might delete them. It could do any number of things we don't know about, without fully auditing the source code.

Using some of the features of UNIX, we can come closer to a CloudABI design. In this one, the ls program does not take any string input. It receives only file descriptors. File descriptor 0 is the directory that we are trying to show the contents of. File descriptor 1 happens to be standard out.
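The shape of that descriptor-only ls can be sketched in ordinary Python. This is not CloudABI code — a real CloudABI binary would have no path APIs at all — but it shows the division of labour: the launcher resolves the path, and the program only ever sees an open directory descriptor and an open output stream.

```python
# Sketch of a capability-style ls: the function receives an already-open
# directory descriptor and an already-open output stream, never a path.

import os
import sys
import tempfile

def ls(dir_fd, out):
    # os.listdir accepts a file descriptor, the openat()-style analogue
    # of listing a directory by path.
    for name in sorted(os.listdir(dir_fd)):
        out.write(name + "\n")

# The launcher (not the program) resolves the path into a descriptor.
with tempfile.TemporaryDirectory() as tmp:
    for name in ("alpha", "beta"):
        open(os.path.join(tmp, name), "w").close()
    dir_fd = os.open(tmp, os.O_RDONLY)   # capability: this directory only
    try:
        ls(dir_fd, sys.stdout)           # prints alpha, then beta
    finally:
        os.close(dir_fd)
```

Because `ls` never receives a string, an auditor can say at a glance that it can act only on the two resources it was handed.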
Given this model, if the program was unable to pass strings to the call to open and the call to listdir, we could say that this process was not able to do anything other than act on the resources we provided: namely, read-only access to a single directory and everything below it, and write-only access to a single file stream, namely standard out.

The problem with this model is that it becomes very inflexible to pass in file descriptors in the exact sequence that they will be used by the program. So, the CloudABI system relies on a new mechanism called argdata. In argdata, there is a set of APIs to gather file descriptors according to a tree structure. Programs can acquire these by key name, as file descriptors, lists, or maps. In the example you see, we use a helper program called cloudabi-run to map a YAML file, containing a description of the input to the program, to the file descriptors that the program will receive.

In this example, the Python executable is not a UNIX executable; it is a CloudABI executable. Therefore, during the build of this Python executable, any reference to standard in, standard out, standard error, the C-level function open, the C-level function stat, or the C-level function opendir would have resulted in compile-time errors. As a result, we can safely say that this execution of this Python script cannot do anything except read the contents of a single directory and write the output to a single file descriptor. This makes this process safe to execute without trusting its source. We need only know what inputs we have exposed to that program. The inputs are explicit, not implicit.

Before a further example, it should be mentioned at this point that this example is, at the moment, hypothetical. The Python port to CloudABI is in progress; it cannot currently do this. Other C programs are fully ported, and there is a cloudabi-ports set of packages available.
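The argdata idea can be approximated in plain Python with a dictionary standing in for the real tree. Real argdata is a binary structure of descriptors, strings, maps and lists passed at exec time, with its own C accessor APIs; this sketch only shows the programming model — the program looks resources up by key name instead of receiving them positionally or resolving paths itself.

```python
# Toy analogue of CloudABI argdata: the launcher opens resources and hands
# the program a tree keyed by name; the program never touches a path and
# never depends on descriptor ordering.

import os
import sys
import tempfile

def program(argdata):
    """The program side: consumes resources by key, not by path or position."""
    out = argdata["output"]
    for name in sorted(os.listdir(argdata["directory"])):
        out.write(name + "\n")

# The launcher side (the role cloudabi-run plays, driven by a YAML file):
with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, "report.txt"), "w").close()
    dir_fd = os.open(tmp, os.O_RDONLY)   # resolved by the launcher only
    try:
        program({"directory": dir_fd, "output": sys.stdout})
    finally:
        os.close(dir_fd)
```

Compared with positional descriptor passing, the keyed tree means new resources can be added without renumbering everything the program already expects.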
To give you a further example illustrating what might be possible, we show here an example configuration for a web server. The web server binary itself would not have its own configuration file. It could not read that file unless provided with it, and that file would contain strings referring to paths which the web server would not be able to open. So in this example, we combine arguments and configuration into a single file, and this file is provided to the cloudabi-run helper in order to acquire resources on behalf of the web server. Were this web server compromised, it could not start listening on new ports. It could not open a network connection to send the contents of any acquired data out to the world. All it could do is serve network traffic on the socket that we have provided.

So at this moment, we ask what we can do in the future with this CloudABI system. We might imagine a future where software appliances can safely run customer-provided plugins or third-party plugins without exposing the internals of their system, or the entire operating system. These plugins would be provided with a limited set of file descriptors, and would therefore be constrained in what they could do to affect the outside world. We might use this to isolate vulnerable systems, such as Pumpmon, or transcoding libraries for security cameras, from fleet-wide security systems. By this means we could avoid problems in error-prone libraries such as the human library ImageMagick, or the various video encoding libraries that have extremely complex input requirements and as a result tend to have many vulnerabilities found in them. We might imagine the ability to use CloudABI in order to implement the human system Amazon EC2 without the overhead of virtual machines or containers. Similarly, we might imagine the human system Google App Engine with the ability to submit programs written in any language: C, C++, Rust, assembly language.
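The web-server scenario can also be sketched in ordinary Python. Here the launcher binds the listening socket and the server function only ever receives the descriptor; under CloudABI the server genuinely could not bind new ports or dial out, whereas plain Python cannot enforce that, so the comments mark where the enforcement would come from.

```python
# Sketch: launcher acquires the listening socket, server receives only
# the descriptor. A compromised CloudABI server could still serve traffic
# on this socket, but could not bind new ports or open outbound links.

import socket
import threading

def serve_one(listen_fd):
    # Reconstruct a socket object from the inherited descriptor.
    srv = socket.socket(fileno=listen_fd)
    conn, _ = srv.accept()
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
    conn.close()
    srv.close()

# Launcher side: resolve address, bind, listen, then hand over the fd.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve_one, args=(listener.detach(),))
t.start()

client = socket.create_connection(("127.0.0.1", port))
chunks = []
while True:
    data = client.recv(1024)
    if not data:
        break
    chunks.append(data)
reply = b"".join(chunks)
client.close()
t.join()
print(reply.decode().splitlines()[0])
```

The design choice mirrors the configuration-file point above: the listening address lives with the launcher's configuration, not inside the server binary.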
Under this model, even these would be safe languages to implement programs in, and programs written in them could be uploaded to a third-party cloud service without virtualization. This would allow us to compose applications, not containers.

I shall now show you a brief demo of what has been achieved with the human language Python and the CloudABI system. Of course, it would help if I showed this on the right screen. In order to run a CloudABI program on this system, we can use the cloudabi-run helper. The Python binary you see has been compiled against the CloudABI system headers and version of libc. The Python binary itself cannot accept standard input or write to standard output. So the file that we are providing is going to the cloudabi-run program, which is a UNIX program. It is opening resources on behalf of the Python binary. The Python binary is then receiving file descriptors. The contents of that YAML file look like this. At the moment the Python binary is a work in progress; this is the first thing that we got working with it. We have transliterated the native UNIX Python arguments into YAML keys, and the command is given verbatim.

We are also able to execute system calls. In this case the Python script, the Python binary, is burning CPU and then printing the result out to standard error. As a result of CloudABI, though, this is the worst that this process can do. We can kill it and know that it has done no damage to the system as a whole, because it did not have the access needed to do that damage. If we have a look at the contents of resource.yaml, we see that all it had access to was read-only access in order to import its standard library, write-only access to the standard error file descriptor, and the ability to execute a simple syscall.

Work will continue on the Python port to the CloudABI system. There will be a sprint running at the human event EuroPython 2016 on Sunday. If you would like more information, please visit these addresses on the human network.
Our usual taps on their networks are in force. Thank you very much.

So, we have some time for questions. Let's start with you, then.

Hi, I'm Guerny. Thanks a lot for the talk. Hail the Queen. Okay, awesome. So I'm wondering, you know, we have in the community a lot of tools that attack the same problem. We have AppArmor, we have things like OpenBSD, and SELinux from the NSA, no backdoors, I promise. So why another system?

So, the problem that we have found in our experience with AppArmor, SELinux, and such systems is that the incentives with them tend to be wrong. It is not the creator of a piece of software that configures those systems; it is typically the distributors and the system administrators. So as a result, the configuration of the protection system, such as AppArmor or SELinux, tends not to be in sync with the requirements of the programs that are running. So all too often, administrators in the midst of battle on a fleet ship will simply turn them off. More so with inexperienced administrators, but even seasoned veterans of multiple harvest campaigns have been known to switch these systems off when there is incoming fire.

Yes, okay, another question if I may? Okay. So is this thing production-ready? What's the overhead, and what is the biggest app you are currently running with it?

The system is still in its early phases. It was conceived approximately two Earth orbits ago, around two and a half cycles, but the creator has been working on it quite a while and is an experienced developer, as humans go. The Python port of this is most certainly not production-ready. It would be tricky even to call it alpha. Unfortunately, as for the human responsible for its development, some inconsiderate human gave them a job, so there was not time to complete it before this briefing.

Hi, thanks for the talk. I'm curious about the support for binary utilities like BusyBox. Are they planned to be added to the CloudABI support?
We have also seen a common pattern followed by humans: they tend to use Linux a lot and avoid FreeBSD and its security-focused tools. And we are also seeing the proliferation of a new tool called Docker, which is taking a different approach to security. Is the acceptance of Docker threatening the future of CloudABI? Are we investing time in CloudABI when we are going to face a different problem in the next harvest?

So, I'm pleased to report that the next harvest fleet is on its way to Earth, and they will pay for their treachery. The human technology Docker provides similar benefits to CloudABI. It has slightly higher overhead and is restricted only to their Linux operating system. The CloudABI support for Linux is 90% complete; it lacks integration with their distributions at the moment. We are working to improve this. What was the other part of your question, please?

Oh, so there is a repository available of human-derived software called cloudabi-ports. There are over 100 packages in it. I do not believe that BusyBox is one of them. The CloudABI model is better suited to long-running daemon processes than to interactive use. It can become quite unnatural; it can be cumbersome to provide all the file descriptors to CloudABI binaries in interactive use in a shell. So that is possibly a future development, but if you wish to see if a package has been ported, I recommend visiting the cloudabi-ports link that was included in your notes.

Any other questions? All right, thank you very much.