My name is Robert Watson. I'm a member of the FreeBSD Core Team and president of the FreeBSD Foundation. I'm also at the University of Cambridge, where I'm currently working on a PhD in computer security. Before I came to Cambridge, I did research at a series of companies, beginning with Trusted Information Systems and ending up at SPARTA. We did work with the U.S. National Security Agency and various others doing computer security work, and much of what I'm going to talk to you about today is an outcome of about six years of that work, gradually going through the research process and out into production. So that's where I come from. What I'm going to do is introduce you to some of the security features that are present in FreeBSD 5, 6, and 7. Some of these features came in around the late FreeBSD 4 series, but most of them became production features in FreeBSD 5. There are one or two things I'll describe that appear in FreeBSD 7 for the first time. In order to understand these features, we also need to understand a little bit about the Unix security model itself, so I'll spend a couple of minutes at the beginning talking about what the Unix security model actually is. Most of the things I'll tell you about are extensions to it, but they're fairly natural extensions. They address well-understood weaknesses in the Unix security model, be it the level of granularity and flexibility, or the degree of accuracy and detail in logging and the security properties of logging systems, and so on. So I think once we have an understanding of the Unix security model, the place these extensions occupy will make a lot more sense. The Unix security model comes out of Bell Labs, and it's basically unchanged, I guess, 30 years later.
Unix is a multi-user system: you have many different users running around on the system. A user, in the context of Unix, might refer to a specific end user who's authenticated when they use the computer, but there are also many system services that run in isolation from each other and have different security requirements. Fundamentally, this is the way that Unix works, and it's worked this way for a very long time. If you look at your Mac OS X box today, you'll find that underneath there are Unix users; or if you use Linux, even as just a desktop system, you'll find all these Unix bits. A lot of work has gone on in security, both security research and product development, since this multi-user, process-based security model was developed, and out of this came a lot of security features that weren't present in Unix. They sort of begin with securelevels, which are worth learning about, but go on to a lot of other things; some of these you will find in other Unix systems, and some you won't. What I'm going to focus on today is work that came out of the TrustedBSD project, which was one of these research projects, funded by DARPA and other agencies and companies. I'm going to talk specifically about access control lists, security event auditing, and some of the mandatory access control features. I won't spend a lot of time on any individual feature; I just want to give you a survey of these features, tell you a little bit about their history, and then mention some of the things you should know if you think about deploying them. I think they also give greater insight into the Unix model itself, once you understand them. As for things I won't talk about: I won't talk about a lot of the work that's taken place in cryptography, cryptographic protocols and so on being one of the big areas of new work in operating system security.
If you look at operating system security in the 1970s, really not much has fundamentally changed in the systems that we run today. Maybe they have more local features, maybe they're on your desktop instead of in a back room taking up two or three rooms, but there actually hasn't been a lot of new operating system security work except in the area of distributed systems, and I won't really focus on distributed systems in this talk. The TrustedBSD project was founded in April 2000. The goal of the project was to provide what are called trusted operating system features on the FreeBSD operating system platform. This can mean a lot of different things to different people. Traditionally, in trusted operating systems there are two focuses of work. One is on providing features that are not otherwise present in the operating system — things like security event auditing and mandatory access control. The other is something called assurance, which is the process by which you argue that the system actually has the security properties you want, whether they're implemented as security features or are just part of the structure of the operating system. In about 2001 we received DARPA funding for the project, and since then there has been a series of sponsors, ranging from the US Navy to companies that have also sponsored significant parts of the work, and the work continues today. Some of the work was very much research work: our work on extensible kernel security — how you change the security policy in the operating system — was definitely research. Other areas really revolved around bringing features that already existed in other operating systems to an open source operating system. Four or five years ago, if you wanted security event auditing in an operating system, you had to buy a commercial product.
You could go to Solaris, you could go to Windows NT, but you certainly couldn't go to an open source operating system to get this feature, which was a real problem for deploying open source operating systems in banking environments and military environments and so on. So we wanted to try to provide those features. Most of the features implemented as part of the TrustedBSD project are now considered mature; they are ready for use in production environments. There are some that are still getting there, and some that are still on the way. I also want to point out that while these features are present in FreeBSD, many of them are also present in Mac OS X, or will be present in the upcoming release of Leopard, which includes the mandatory access control framework adopted by Apple. The audit work was actually developed under contract to Apple and has since been merged into FreeBSD; and Apple is picking up a lot of the changes we've made in FreeBSD and merging them back into Mac OS X for the next release as well. So, let's talk a little bit about the FreeBSD security architecture. As I said, this is very much a Unix security architecture. It revolves around a monolithic kernel: a single piece of software running in a single address space that is basically the trusted part of the operating system. This is the part you place the most confidence in; if this is compromised, then you are screwed. On top of the kernel run a series of processes. These are essentially isolated virtual machines. They run pretty much in isolation from each other; they must communicate with each other through the kernel, and so the kernel plays a role in mediating access between them — inter-process communication, access to shared objects like the file system, or resources like the network stack. The process model developed in Unix is now found in pretty much every other operating system: Windows, all the Unix derivatives. It's a very simple model.
It's a very effective model for isolating pieces of software from each other, and all of Unix security really rests on the foundation of isolation that's present in the process model. It says that when something is running in one process, unless it explicitly acts through the kernel, it's not going to be communicating with any other process. It won't interfere with them; it won't be subject to their whims. Accidental writes to memory won't turn up in one process from another unless some explicit action has been taken to make that possible. Unix also provides something called a credential. Processes each have a credential, which is a set of security tags that tell you about the process from the perspective of security. In the classic Unix model, these are user IDs and group IDs. These are things that allow you to subset — to break processes down into groups that have similar security properties — and everything else builds up on top of that. In classic Unix there's a privileged root user who's able to do anything and is used for system management purposes. Some Unix systems have managed to break away from this; systems like Mac OS X still have the notion of a privileged root user, even if that user is never able to log in. There are still processes running as root in order to gain privilege. And I should be specific about what I mean by privilege: in this context, privilege means the right to violate constraints of the security policy — the ability to violate protections between users, to change the ownership of files, to modify the file system arbitrarily; to do things that would violate the security properties that the overall policy depends on. So that's what we mean by privilege. Most of the new security features I'm going to talk about are natural extensions to this. They don't fundamentally change the model; they augment it in various ways. They add things that were missing from the original picture.
Briefly, about access control. I said that privilege is the right to violate access control policies. Access control policies come in a variety of flavors; I'll talk about them at a high level. Access control policies may be fixed — part of the operating system's design — or they may be controlled using user-definable policies, administered in one way or another. We'll refine that concept a little. So here's the picture: the kernel is this omnipresent piece of software that's mediating accesses, and all these processes run in isolation from each other. The exact nature of the isolation isn't too relevant, but it includes address space protection, it includes things like controls on signaling, all sorts of things. And the kernel is of course sitting in between, mediating every request. This mediation is an expensive process — it requires CPU cycles, it requires memory accesses — so we do it in a somewhat restrained way. We don't check every time there's an access; we often check up front, when communication is initiated, and then we allow it to continue indefinitely, and maybe we don't even support revocation. In the Unix model, objects exist in the system — files, for example — and each will be owned by a user. It might be the root user; it might be any other user. And we break protections down into two kinds. One of them is mandatory protection, the other discretionary, and the only distinction between those two words has to do with who sets the policy. A mandatory policy is administered by a security administrator or a system administrator, and users cannot change those protections; they are universally subject to them. A discretionary policy is one in which the owner of an object sets the protection. So Unix permissions are, by definition, discretionary: anyone who owns an object can change its protection, and maybe no one else can.
Mandatory policies are very important for enforcing things like information flow, or for protecting services from each other. Discretionary policies are very useful: they allow users to collaborate and share information. But they're also very unreliable, because you have to rely on the users doing the right thing — they have to manage all these permissions themselves. Users are bad at managing files, let alone the permissions on their files. So in environments where you have security requirements for the isolation of users, discretionary protections are not enough, and a lot of the work we did as part of this project was to introduce mandatory protections. What kinds of things are typically mandatory in a classic Unix system? The ability to debug processes or deliver signals: no matter what a user does, they can't give another user the ability to deliver a signal to their processes — the operating system will always prevent delivery of signals to processes that aren't owned by the user. Another example: in Unix you cannot delegate the ability to set permissions on a file. That's something only the owner can do; they can do it on behalf of another process if requested, but they cannot delegate the ability. For discretionary protections, file permissions are probably the best example, but the same kinds of permissions more or less exist for inter-process communication objects such as System V IPC. So we have this kernel which provides a pretty basic set of primitives — isolation, some notion of credentials, and so on. We actually build most of what people recognize as the Unix policies entirely in user space. The kernel does not hard-code things: it doesn't interpret user IDs or group IDs, it simply says they exist, and then it enforces some basic protections. Everything else is implemented in user space.
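You can see the mandatory parts of stock Unix from the command line. The following is a minimal sketch, not something from the talk; the user names alice and bob are hypothetical accounts, and the commands assume you can switch users with su:

```shell
# Discretionary: the owner may change a file's permission bits.
su -l alice -c 'touch /tmp/shared.txt && chmod 644 /tmp/shared.txt'

# Mandatory: a non-owner may NOT change them, and the owner cannot
# delegate that right. This fails with "Operation not permitted".
su -l bob -c 'chmod 600 /tmp/shared.txt'

# Likewise, bob cannot signal alice's processes, no matter what
# alice does; $ALICE_PID is a hypothetical PID of an alice process.
su -l bob -c "kill -TERM $ALICE_PID"   # fails with EPERM
```

The first rule is discretionary (the object owner sets policy); the last two are mandatory (fixed by the system, not changeable by any user short of root).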
So the notion that users authenticate and log into the system, the notion of network access — these are all entirely constructed in Unix user space, and you can, of course, change them, and people do. The kernel provides some very low-level primitives: process isolation, notions of credentials, object ownership, and access control. The kernel knows nothing about passwords, remote access, or home directories — that's all user space. So what are some of the things we find in user space? Authentication, and the mapping of communication channels to users. User IDs by themselves are meaningless: who do you assign user IDs to? Do you assign one to every task run by the user? Do you assign different UIDs to different tasks? That's all controlled in user space. In the usual Unix model, when a user logs in, their login process gets marked with their user ID, and that's the way the user ID policy works. Once a process is assigned an unprivileged UID, it can't easily get back to being privileged. Concepts like PAM are also entirely user space. And you can see the side effects of this when you look at the behavior of a Unix system — there are some odd things happening. If you change the group file, processes don't magically get updated to have the new list of groups for each user; you actually have to wait until the user logs out and back in again before their processes are marked. The reason is that processes get marked when the user logs in, and those identifiers don't change later. There's actually a whole class of problems here, the revocation problem, in which you take someone out of a group, and yet their processes keep on running and keep having access. This is a cache consistency issue, and it comes directly out of the architecture. This is a layered model: we have a kernel — a trusted piece of software — with user space policy built on top of this layered model. There are some obvious limits to this model.
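The group-file behavior described above is easy to demonstrate. This is a sketch assuming a FreeBSD box, root access, and a hypothetical already-logged-in user alice:

```shell
# As root: add alice to the wheel group in the group database.
pw groupmod wheel -m alice

# In alice's EXISTING shell session, the credential was set at login
# time, so the new group does not appear:
id -Gn            # no "wheel" in the output yet

# Only a fresh login picks up the change — the classic revocation
# (here, grant) problem in reverse:
su -l alice -c 'id -Gn'    # "wheel" now appears in the new session
```

The same mechanism means that removing alice from wheel leaves her existing processes with wheel access until they exit — the cache consistency issue mentioned above.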
There are certain things that are very difficult to deal with in the Unix model, and each of the features I'll talk about addresses one of them. I'm going to talk briefly about access control lists. Access control lists are actually somewhat controversial. They ship in basically every operating system that's available; they are mandated by the Orange Book and the Common Criteria — if your system is evaluated for use in a military or government environment, you must have access control lists. So you see them in everything from Windows to IRIX to Linux, and the details do vary. There are basically two kinds of access control lists floating around: something called POSIX.1e ACLs, and another type called NT ACLs, or NFSv4 ACLs. The world kind of splits into two camps on this, and the unfortunate thing is that they are not really mutually intelligible. You pretty much have to pick one or the other. Samba is probably the piece of software that's done the best job of trying to make them live side by side, and it's impossible to accurately represent the access control concepts of one of these schemes in the other in any human-manageable way. There's an IETF draft on how you might do it, and if you read the draft — and read between the lines — it says: don't ever do this. You have to pick one. At the time we implemented ACLs, POSIX.1e was clearly the way to go. All of the commercial Unix systems implemented either POSIX.1e ACLs or something that was very, very similar — slightly different semantics, but fundamentally the same. So that was the world of Linux, IRIX, and Solaris. A little later Windows comes along, and Windows ACLs are different — they're not compatible — and those were picked up in NFSv4. Presumably the reason was that the people working on NFSv4 believed that Windows was the client architecture of choice, and the server needed to provide what the clients needed.
So NFSv4 falls back to the Windows NT ACL model, which is also the NTFS ACL model, and most recently Sun's ZFS has adopted that, presumably in large part because of NFSv4 ACLs. Until recently the semantics of those ACLs were not very well defined: if you looked at the NFSv4 specification, the definition was essentially "go look at Windows." But that has recently been resolved through a lot of work by Sun to make ZFS work with NFSv4, so that's improved. We implemented POSIX.1e ACLs, and if you use Samba, some magic mapping takes place — it's not pretty, but your end users will be able to manage the ACLs using the NT ACL manager over CIFS. In FreeBSD we store access control lists in extended attributes, which essentially means you need to be running UFS2. That's all right: we've been shipping UFS2 for years, and it is the default, so there's nothing you really need to do there. We ship support for ACLs by default in our kernel, so if you install FreeBSD the support is there. You do need to administratively enable access control lists for each file system where you want them present. As an administrator, you will want to know whether a file system has access control lists in use, because you want to make sure it is backed up properly and that your users aren't doing surprising things. If you have Unix scripts that assume everything is just permissions, then you may run into trouble with access control lists: your scripts won't understand them, and they may get unexpected results. So enabling them is a deliberate, per-file-system decision, and you turn it on yourself. Unix file permissions are, in essence, a very small access control list: the owner of the file, the group of the file, and everyone else. ACLs are an extension to this: they allow you to assign rights to additional users and additional groups. The goal of doing this is actually pretty obvious. The problem with the Unix model is that any time users want to collaborate usefully in a group, the administrator has to create the group for them.
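The per-file-system enabling step looks roughly like this. A sketch for a UFS2 file system on FreeBSD, run as root, with /home standing in for whatever file system you care about:

```shell
# Enable the POSIX.1e ACL administrative flag on a UFS2 file system.
# The flag lives in the superblock, so set it while unmounted
# (or from single-user mode), then remount.
umount /home
tunefs -a enable /home
mount /home

# Confirm: the mount options should now include "acls".
mount | grep /home
```

Because the flag is stored in the superblock rather than in /etc/fstab, it stays with the file system across reboots and re-mounts, which is exactly the administrative, deliberate behavior described above.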
The administrator has to take explicit action to set up access controls so that users can share data in a meaningful way — so that users can say: this user is allowed to read it, these two are allowed to write it, and this group gets these special rights. If by default the administrator must be involved, that is not scalable system administration; it's a disaster. It just doesn't make any sense. So access control lists are intended specifically to address the question of how you let users manage the protections of their own objects, and they pretty much do that. What they do is allow you to add additional entries which say: this user, or this group, has these additional rights. That's really quite straightforward, with the exception of the ACL mask, which is provided for compatibility with permissions, so that existing applications keep behaving the right way. There's a question of how the mask should behave — whether it should be conservative with respect to security or permissive — and increasingly, systems have taken the permissive side. Right now FreeBSD does not expand access, which is conservative, but for reasons of compatibility and user expectation we'll probably change that. Here's an example. On the left-hand side you have ordinary permissions, where you can say the owner has read and write access, the group has read, and other has nothing. But with an ACL, you can express things such as: this specific user has read and write access, and that one has read. And the mask is a mask over all the rights delegated using user and group entries other than the file owner's. So if you chmod a file and change the group bits, it will mask the rights assigned to all the other users.
So if you chmod a file to 0600 — that is to say, take away read and write access for group and other using the chmod command — it will mask out all the other rights delegated using ACLs, and the result is conservative behavior. The difference in Solaris, Linux, and IRIX is that they don't take the conservative perspective when creating new children in directories: they combine the umask with the mask from the default ACL, which allows users to essentially override the umask using an ACL, and that is not conservative behavior. There are two commands you use: getfacl is used to read ACLs, setfacl is used to write them. You can run getfacl on a file that doesn't have an ACL and you get back the normal set of permissions; if you run it on a file that has an ACL, you get all the additional entries. It's pretty straightforward to use, and you always have the basic fields from the permissions as well, so applications can pretty much use the APIs they're used to. The only other interesting piece is the default ACL on directories. This tells you whether anything special needs to be done with new entries in that directory, so inheritance works the way you expect. As for tools: ls is able to identify ACLs, dump is able to back them up, and restore is now actually able to restore them, which is great. The man pages are fairly straightforward, and these commands are consistent with what you'll find in pretty much any other Unix system, with the exception of Mac OS X. The documentation available includes a tutorial in the FreeBSD Handbook and pretty extensive man pages. So what ACLs do is give you additional flexibility. One of the big criticisms of access control lists is the management model: if you thought permissions were bad, wait until you use access control lists. You have these things on every file in your file system, and every one of them can now carry extended information.
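To make the getfacl/setfacl and mask behavior concrete, here's a small sketch. The user alice and group audit are hypothetical; this assumes a file system with ACLs enabled:

```shell
# Grant extra rights beyond owner/group/other.
touch report.txt
setfacl -m u:alice:rw,g:audit:r report.txt

# Inspect: shows the owner/group/other entries plus the new
# user and group entries, and a computed mask entry.
getfacl report.txt

# chmod 0600 rewrites the mask: alice's rw and audit's r are
# masked out — the conservative behavior described above.
chmod 0600 report.txt
getfacl report.txt     # extra entries remain, but are ineffective

# Strip all extended entries, back to plain permissions.
setfacl -b report.txt
```

Default ACLs on directories are managed the same way with setfacl's -d flag, which is what makes new files in a project directory inherit the intended entries.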
So just like with permissions, you end up managing whole sub-trees and setting ACLs on all of them, and this is only semi-desirable. Usually what people do is assign access control lists to a parent directory for a project, along with a default ACL, and then all the access control lists underneath manage themselves. And when users do get surprised, it helps that this really is not that different from permissions. Next: security event auditing. I'm actually sort of torn between mandatory access control and event auditing as to which is the most useful feature that came out of the TrustedBSD project — with the possible exception of the UFS2 file system. And the OpenPAM work, I think, is also very useful. If you've ever used syslog to try to do post-mortem analysis on your system, you've probably found that it's a somewhat limited tool. You can't trust the things you find there — records can be inserted by attackers, all kinds of things can go wrong — and there's no detail: you can't verify what files were touched. Event auditing is really about addressing that concern. The standards for event auditing are the Orange Book and the Common Criteria; if your system is evaluated, the requirements pretty much come out of those. There are basically three requirements: that the logs be secure, that they be reliable, and that they be fine-grained. And out of fine granularity comes the requirement for configurability. Syslog logs are not secure: anyone can inject log records into a syslog file, anyone can write a syslog line claiming "the kernel says the following thing happened" or "sshd says the following thing happened." In Mac OS X they've done a little work to resolve this, recording credential information with log entries in the log files. But Unix log files, as written, are essentially useless for reliable post-mortem analysis. Syslog is also not reliable any time your system comes under heavy load: syslog uses Unix domain sockets between applications.
It is intentionally lossy. If your system runs out of disk space, or you log to the network and the network is busy, records get dropped. Syslog is also not very detailed: if you want to know who accessed what files on your system, syslog will not tell you. And all of these things are required for Common Criteria evaluation. So these requirements come together in a new logging system: audit. Audit is essentially about post-mortem analysis. After the fact, you want to assign blame; you want to figure out what went wrong and how it happened. Someone broke into your PHP-based web server somehow, and you don't know how, or how they defaced what they defaced. Syslog will never tell you the answer to that question, but audit is intended to. Audit is also frequently used in intrusion detection, where you need a detailed event stream that tells you what happened in the system, and that is software-manageable and manipulable — it's definitely intended to support that, and it has a machine-readable record format, unlike syslog. It's also used for live system monitoring and system debugging: if you have a detailed logging system, it's great for figuring out what actually happened in a debugging scenario, not just a security scenario. As I said, the C2 evaluation coming out of the Orange Book requires auditing, so if you look at Windows, Windows has auditing also; and more recently the Common Criteria has evolved those requirements a bit. It's found in all manner of commercial operating systems. Audit is probably the feature that's come latest to open source operating systems, which I think is actually kind of distressing. The way this works is that audit log files are a lot like other log files: they contain a stream of events. These are called records, and audit files are referred to as trails — a trail of records as the system operates.
Each record describes an individual event. It looks a bit like a line in a syslog file: it comes with a date stamp and a time stamp, and it tells you about the event and who did it. Events can be attributable — that is to say, they can be attributed back to some authenticated user — or unattributable, which means that no authenticated user led directly to the event you see. This is an important distinction in the standards. We also have selection, which has to do with deciding what to log and when. This is important because if you really were to log every event in your system, it's amazing how quickly the disks fill up. So you want to decide very early on what to record — auditing consumes I/O resources and memory resources — and we have a configuration system that allows you to select which users have which events logged for them, according to what you're interested in. On the whole, the security standards require audit events for any event relating to security, and they define events relating to security in terms of the security requirements: the Common Criteria standards say you must authenticate users, and if an event falls into one of those categories, there's a requirement for it to be auditable. We are able to audit the large number of events required by the Common Criteria. But if you look at how Unix systems work, it turns out almost everything has to do with security, because you're almost always making a security decision: does the user own the file they're trying to write to? Are they allowed to read the directory they're about to perform a lookup in?
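The selection configuration lives in a couple of small files. The following is a sketch of what pre-selection looks like on FreeBSD; the user name alice and the specific flag values are illustrative, not from the talk:

```shell
# Global pre-selection policy for attributable events.
cat /etc/security/audit_control
# dir:/var/audit
# flags:lo,aa        <- default: login/logout and authentication events
# minfree:5          <- complain when the audit partition is this full
# filesz:2M          <- rotate trail files at roughly this size

# Per-user overrides: username:always-audit-flags:never-audit-flags.
grep alice /etc/security/audit_user
# alice:lo,ex:no     <- for alice, also record every exec

# Render the currently active trail as text.
praudit /var/audit/current
```

The flags are event classes (lo = login/logout, ex = exec, aa = authentication), so "which users have which events logged" is exactly a per-user list of classes layered over the global default.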
Any event that involves a privilege check is a security event in any Unix system, so those are all loggable events — most system calls, for example. Is the caller the superuser? I think even stat checks for the superuser; that comes out of trying to mask the file generation number so that people cannot forge NFS file handles, which I think is historically a necessary protection, but it means the check happens in surprising places. So we audit access control checks in the kernel, and obviously user space events as well — things like logging in, and access control decisions involved in setting up groups — so user space also needs to generate audit records. Account management too: the creation of accounts, the deletion of accounts, adding and removing groups, these sorts of things. And, of course, changes to the audit configuration itself. When we set about implementing audit, we were actually implementing it on Mac OS X first, as part of their Common Criteria evaluation, and we basically looked around to see what people had already done. If you look at what people have done, the two de facto industry standards are the way NT did it and the way Solaris did it — and guess which one of those two we picked. We chose to implement Sun's BSM audit trail format, which is a well-defined file format for audit trails, and their API. This means that if there are existing applications that speak audit, they still work: for example, the SSH support for audit that comes out of Solaris works out of the box on FreeBSD. If you have intrusion detection tools for Solaris and they consume BSM audit trails, they will generally just work on FreeBSD trails. The reliability requirements for audit are quite interesting: you must get records to disk for events that have been selected. What does this mean?
It means that if you're unable to write the record to disk, then the event isn't allowed to happen. So the reliability requirements are quite tricky. One of the requirements coming out of the C2 spec was: if you're unable to log the event, then you must halt the machine. So you have to keep track of the available space for log records, so you can see how much space remains. Now, we haven't turned that feature on by default, but we do actually support it. The audit system is also able to bound the number of audit records buffered in memory, so that if the power fails, you have guarantees about how many records are on disk and how many aren't. You can say: I'm willing to lose no more than 64 audit records; or: I'm willing to lose several thousand, because I'm really interested in performance and in not slowing the system down with the I/O rate of audit records. These are quite important requirements, because you need these guarantees if you're going to reason about post-mortem analysis. One of the key features was keeping the data format as present on Solaris, so if you have existing Solaris configurations, they just work: you're able to look at users and say, I'm interested in all of the files opened by this user, but for that user I'm only interested in login records. We use only Unix protections on the file system to protect the log records — this is actually just like system logs — so if your root account is compromised and you're running only discretionary protections, not mandatory protections, then your audit log is potentially compromised as well. This used to be quite a controversial point. I think it's much less controversial now, because a lot of the compromises of systems aren't operating system compromises anymore — we've done a very good job of starting to track those down and clean them up — a lot of them are application compromises. The PHP example doesn't involve any compromise of operating system code, yet you still want a reliable operating system audit trail.
The trail format is a binary file format, and there's also the ability to output a couple of different text formats; most recently, in FreeBSD 7, we added the ability to output XML, but there's also something a bit more readable. What kind of information do you get? What kind of event it was, say an exec of csh; what time they did it; who did it, and who they were acting as; where they logged in from. All of this information is available in a parseable format, so for an exec event you can ask: what time did it take place, what program did they run, what were the arguments, what was the path of the file that was executed, say the path of csh, and what user ran it and where they logged in from. This is quite useful information for forensic use. As I said, we log all sorts of stuff: we log path accesses, we log accesses to the network. There's the potential for a huge amount of data, in fact terabytes of data on a busy system, so obviously you need tools to cut this down. We actually support two kinds of selection. One is called pre-selection, which has to do with what records you generate at the time the event occurs: what information do you want up front, and how much I/O are you willing to incur. The other is called post-selection, which has to do with reducing the logs later. You might want to do this because you're interested in archiving different things over time: for one week you keep a list of every command run on the system; after a month you're interested only in login and logout information; and eventually maybe you're only interested in administrative events, like reboots, or users being added and removed. We support both of these. When a user logs in, we set up audit state for that process, and every process derived from it inherits that state, and this is
done using configuration files, per user. So when a user logs in, they might get, for example, all their exec records. On my shell server I'm interested in every command run by an authenticated user, so I'm able to log that along with command line arguments; but maybe for the web server user I'm less interested in every exec that takes place. And then maybe you run a cron script once in a while that reduces the audit logs you have: you go through and compress them as they age. We support both of those models. Reduction is interesting: we have a tool that takes audit streams, which can be files or pipes, and cuts out records. It's a little bit of a grep for an audit trail: you can say, find all the records in this trail that correspond to a specific user, or a specific group, or a specific file, which is quite useful. One of the things we've added that isn't present in most of the audit systems in commercial operating systems is audit pipes, and this comes out of the desire to do intrusion detection. Audit trails are these reliably stored files on disk, owned by the audit system; they're written to a special partition in a special file format, and controlled by all these configuration files. But if you have an intrusion detection system, you may be interested in changing the set of records you're gathering over time. You may have a daemon sitting there that says: oh, this user logged in, and then they ran this, and that's very strange. Or, instead of reliably guaranteeing that records get to disk: give me a lossy stream, tell me when you start dropping records, but don't slow the system down, just tell me what's going on. We did this with something called an audit pipe, which is actually a device: you open /dev/auditpipe, you attach to it, and you can issue ioctls on it to control the set of records generated for it by the system, independently of the global
configuration or any other pipe, and then you sit there in a loop reading records. This is all actually pretty well documented: we have a lot of man pages on the file formats, and we also have a handbook chapter on setting it up. We also have something called OpenBSM. When we started this work there was no open source implementation, so we had to implement all the library functions, write the documentation, and so on. We went ahead and did that, and it's now the only available BSD-licensed BSM implementation; I've actually tested and used it on Linux, and it is also hopefully being adopted on Mac OS X in order to keep their implementation in sync with ours. So it's something you could potentially use if you want to bring audit to another operating system platform, and we have an implementation paper that was presented at UKUUG. Audit is actually a remarkably useful tool. I find it incredibly useful in debugging; I have used it in most of my work environments. Actually keeping a log of every command run for a week or so, in a very detailed way, including maybe every TCP connection that took place, is incredibly useful for understanding the behavior of a web server, so I'd recommend it both for monitoring use and also for debugging. In FreeBSD 6.2 this is considered an experimental feature and it's not enabled by default in the GENERIC kernel; in FreeBSD 7 it is in GENERIC, and in FreeBSD 6.3 I think we'd also like to ship it in the GENERIC kernel. I also mentioned that there were two things that went on over the summer in Google Summer of Code. A distributed audit daemon was written: this allows you to manage audit centrally and distribute audit trails across machines, so you can have a single audit server that pulls in audit trails as they are generated. This is part of addressing the question of what you do about the fact that the root user is able to modify audit trails on disk: you move them to a central machine that has different
security properties; maybe it archives them, or writes them out to read-only media. We also have a graphical viewing and analysis tool now; it's a little like Wireshark for audit trails. You can load up a trail and say: show me all the records that reflect this user; now show me what files they accessed; and so on. That's quite a useful tool; it's very much in development still, but it could be quite useful. The last thing I want to talk to you about is mandatory access control. Earlier on I told you about discretionary access control; mandatory access control has to do with who is allowed to control the protections. Historically, mandatory access control referred to the military security model, multi-level security, in which you are concerned very much about secrecy: Classified, Confidential, all of that, the concept that you can read down but you can't read up, and you can write up but you can't write down. I would say this is largely irrelevant to the current commercial world, although there certainly are lots and lots of MLS systems. But when we use the term mandatory access control, we are actually referring to something more general: we are referring to the fact that the administrator can set policies on the operating system that control all the users there, regardless of whether the users are interested. And while we are interested in information flow, we are also interested in hardening policies: controlling the access users have to files owned by other users in a mandatory way, regardless of the permissions on the file. So there are a couple of pieces here, and as I said earlier on, there's some research involved. One of the research questions is: how do you change the access control policy of the operating system without breaking the operating system horribly? Very early on in this work, when we were starting to implement different access control policies for FreeBSD, we noticed that we were doing a lot of the same work again and again. You decide you want to introduce a new
policy; the policy is going to have security labels on files; it's going to change access control in the file system, and maybe also on sockets. Once you put all these hooks in place, you can put in calls to these new security checks; and then you're going to have two policies, and they're going to have all their hooks in roughly the same places. So one of the design choices was to create a framework that allows you to change the operating system access control policy, and that's what the MAC Framework does. It supports loadable policy modules as well as compile-time policies, and it instruments all the security decisions in the kernel. A policy might choose just to implement changes for file systems, or for sockets, but the capability is there to influence pretty much every access control decision in the kernel. In practice we see two kinds of policies. One is the information flow policy: this is a labeled policy, in which security labels are attached to objects in the system. If you've used SELinux, it implements Type Enforcement, which is a label-based policy; you make access control decisions by looking at the label on the process and the label on the file. That's the kind of thing we're talking about there, and we have implementations of several of these. We're also interested in hardening policies, which are typically label-free. These are for administrators who want to change the access control properties of the operating system in a constrained way: maybe they just want to change the rules about who can bind which TCP ports, so that the www user is the only one allowed to bind port 80. As I said, we have a lot of these policies; we ship some with the operating system, and some are available from third parties. In the labeled area, probably of less interest to a general audience, we have a fixed-label integrity policy, in which all objects in the system are assigned integrity levels and integrity compartments; we also have multi-level security support for the so-called
military model, and LOMAC, which is a floating-label integrity policy, which is quite interesting, and a partitioning policy. In the hardening area, we have a port access control policy that controls which users are allowed to bind which ports, and we have one for controlling interactions between different users, which is like a firewall policy for the file system: you can establish a set of rules to control which users are allowed to access files owned by which other users. This is quite useful in ISP-like environments, where you basically want to assert globally that users cannot read each other's files; there's otherwise no easy way to do that, and it's quite tricky to express, especially if you don't want users to accidentally reveal their files, which they tend to be very good at doing. We also have several third-party modules, things like cryptographic signature checking of binaries, and also of shared libraries, as programs execute. We have a port of the SELinux Type Enforcement policy that runs on FreeBSD. We have a privilege policy that allows you to assign operating system privileges at a fairly fine granularity to specific users. There are also a number of policies running in various products; for example, Secure Computing's Sidewinder firewall has its own implementation of Type Enforcement based on the framework. And one of the goals of this project, when we did it, was to allow people to produce appliances based on FreeBSD and customize the security model of the operating system without changing the operating system itself: without maintaining changes throughout the kernel, without changing the APIs and all these other things, so that a policy pretty much just plugs into an unmodified FreeBSD kernel. Recently Apple announced that they've adopted the framework. I can't describe their security policies in any detail, as the product is not shipping yet, but Apple has done some extremely interesting things, some things that I never expected would be done with a mandatory access control framework. For example, parental controls are implemented using mandatory access control, making for some very, very upset 12
year olds. But they also do some other quite interesting things, like data tainting and stuff like that; it's quite interesting, and hopefully many of those policies will be able to run on FreeBSD as well. Let me just give you one example of a mandatory access control policy that's more generally useful than some of these floating-label policies, and this is the user and group file system firewall policy. As I said, the goal of the policy is to allow you to control interactions between users using the file system. You might have 100 users on the system and you really only want them interacting in certain ways: you want the web server user to be able to read everyone's files but not write anyone's files; the student group should never be able to read or write files owned by the staff group on this particular file system. This policy allows you to do that, using a tool that really looks like a firewall management tool. You set up a number of rules; it iterates over these rules to find which one matches, and there are different ways this can be done; and then it goes ahead and imposes restrictions beyond what's allowed by the permissions. So here's one example, using ugidfw: set rule 100, so that if a file is owned by a given user, then the web user can read it, and even stat it, but cannot write it, and this overrides the permissions in the system. It's quite a useful tool. There's a lot of documentation for this too: we have several man pages just about managing mandatory access control; we have a handbook chapter which goes through the basics of setting up a number of the policies; and we also have implementation papers that go into the research aspects of it and how it's implemented. And I expect you'll start to see a lot more of this now that the framework is getting more exposure and has the momentum that it does. So I've quickly gone through three of the new security features that appear in FreeBSD that are new with respect to the UNIX security
model, and I think, looking at them, you can see how they start to fill the gaps left in the UNIX security model. If you want detailed logging of the system for security purposes, which is obviously missing from UNIX, audit provides that in a pretty neat and compatible way. I think it was very wise to use the same file format as Solaris, and Apple has also picked up the same file format; so if you have audit trails from Mac OS X, Solaris, and FreeBSD, they're all immediately interoperable, which is really quite useful, and all the command line tools can be run on each of these trails, which is quite helpful. Access control lists: there are management issues in using access control lists, in the same way as with the regular permissions, but these turn out to be very practical tools when you want to have a series of users interact and you're managing web pages and CGI scripts. Users can now say things like: I don't want this directory to be world-readable just because a CGI needs to read it; I want only the web user to be able to read it. That alone is a quite useful tool. And mandatory access control we originally introduced, I think, mostly to help appliance vendors who have custom security requirements, but it turns out to be a much more general tool. Right now the MAC Framework is not in the FreeBSD GENERIC kernel; I have some hope that we'll be able to bring it in by default. You can find out more about all of these things by looking at the FreeBSD Handbook and the TrustedBSD web site. Question: when logs are stored on another host, is there something to ensure that no logs were changed, that all the logs are there, and that no logs have been deleted? As it stands, the policy is that discretionary access control protects the files: there's an audit group, and users in that group are able to review the files but can't change them; the root user, however, can change the trails that are stored on disk.
Right now we don't build in facilities to digitally sign the audit records as they're generated. I think there are some very interesting questions about how you would do that reliably on a single system, because you have to store the keys somewhere in some consistent form so that you can perform occasional signatures. My recommendation would be to spool the trails out to a second system, or to write them to write-once media; and if you're going to store a whole lot of write-once media, that has its own costs, so I think the distributed approach is the best way to go. With an integrity policy like the Biba integrity policy, you can actually do things like mark the audit daemon and all the audit files as high integrity, and even if the user gets root privilege at low integrity, they're not going to be able to access the files; so if you're running with a more complex policy, you're able to express more interesting things. One example of what you might do: once in a while we rotate the log file. You can say: when a log file reaches a megabyte, rotate it; terminate the previous one, rename it so it has date stamps, and a script runs when that happens. So if you wanted to do occasional signing, or occasional offload for the cryptographic operations, you could set whatever interval you wanted for that to happen, but there's no built-in support for the signing itself. The one thing that the distributed audit daemon doesn't currently do is sign trails before it sends them; I think we'd actually want to add that, and I think it would be quite possible. Question: have you looked at [a log management tool; name unclear]? I'm not familiar with it; can you describe it? It's basically much like the syslog daemon's rules, except rewritten and more mature. Is it able to handle things like file opens? I've not looked at it, but it sounds like it would probably be a good match; nothing says you can't use existing tools that manage logs
to work with audit. You can send signals to the audit daemon to say: okay, go ahead and rotate; you can send it messages to say: please terminate whatever file you're currently generating, hand it over, and don't touch it again. So if you want to pick up log files using another tool, you can certainly do that. The volume of audit records is such that we actually write them directly from the kernel to the file, so there is an audit daemon, but it's just for management purposes; a process that had to receive every message before it could be written out to the file system, or sent over the network, would kill performance in the common case. So that's a design decision. The other reason the logging is in the kernel, and not just the capture of the events, is that only the kernel has complete information about how much has been written to disk, so only the kernel can enforce the guarantee that the number of outstanding log messages stays below a certain maximum. So I think that also has to remain where it is, to provide the guarantees. But in terms of managing the log files: they're files in the file system, so if you want to use another tool there's no reason not to do that, and if there are changes to be made to the audit daemon to support that kind of thing, that would be quite welcome. Any other questions?
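The rotation and handoff controls just mentioned are exposed through the audit management utility; a sketch of the common operations, assuming the OpenBSM audit(8) tool (treat the flags as illustrative of the mechanism rather than a complete reference):

```shell
# Ask the audit daemon to terminate the current trail file and start a
# new one; the old file is renamed with its begin/end timestamps
audit -n

# Re-read the configuration files after editing them
audit -s

# Shut auditing down cleanly, closing out the active trail
audit -t
```

An external log-management tool can then safely pick up any trail file other than the active one.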
Question: I'd like to have some kind of mechanism that extends this to, let's say, network traffic. I want to be able to say, in some kind of policy, that everything that comes in from the Internet is not allowed to pass further into my network; some kind of taint mechanism. So, you do have that with mandatory access control; in fact, we've already implemented policies that can do that. If you use, for example, the integrity policy, you can mark one network interface as being low integrity and the other as high integrity, and no traffic will be allowed to pass between them unless it goes through a trusted process that performs, essentially, a change in classification. So you can have a series of trusted processes on the system that have labels that allow them to do that, and then a series of untrusted processes on each side that do regular things. One of the common things to do with Type Enforcement, which is the policy SELinux uses, is something called an assured pipeline, where you label the two network interfaces, and then you have a set of processes that sit between them passing the data through, all labeled such that each one can only pass data to the next stage of the pipeline. So you have an HTTP proxy on one side and an HTTP proxy on the other, and in between you have some scrubbers, a caching piece, some access control pieces, and an antivirus piece; and the middle pieces, like the antivirus, can't connect directly to the network, they can only talk to the two sides of the pipeline. If you look at a product like Secure Computing's Sidewinder, they actually do this, and you can certainly use the tools that we provide to do exactly that. Any other questions? Thank you very much.
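As a footnote to that last answer, here is roughly what labeling two interfaces under the Biba policy might look like on FreeBSD; a sketch with hypothetical interface names, with label syntax as in mac_biba(4):

```shell
# Load the Biba integrity policy (or compile it into the kernel)
kldload mac_biba

# Mark the outside interface low integrity and the inside one high
# integrity; traffic can then only cross between them via a suitably
# labeled trusted process
ifconfig em0 maclabel biba/low
ifconfig em1 maclabel biba/high
```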