Welcome to the annual DEF CON convention. This meeting was held in exciting Las Vegas, Nevada, July 9th through the 11th, 1999, and this session is on setting up firewalls and gateways. If you're here sitting listening to me talk about gateways, you're actually also hearing about secure servers, because you're dealing with pretty much a lot of the same concepts, and the target implementation is NetBSD, on the grounds that that's what I'm running, that's what I can field questions on, and it runs on a lot more platforms, stable, under one source base. You can't see it, but it's a work in progress. It's a work in progress, which means that things are still under development. I'm still working on scripts to automate things, still fine-tuning and getting the examples set up, and eventually this will all be on a website so you can take a look at how this is all laid out and really easily see the example code and example configuration. Yes, NetBSD is Berkeley-derived code. It is open source. It is under the Berkeley copyright, not the GNU Public License, and I actually prefer the Berkeley copyright over the GNU Public License just due to the way the GNU license is set up as far as getting other companies interested and so forth. Does that answer your question? Theoretically, a lot of the concepts are the same. How you might go about implementing them may or may not be the same. When I say it's targeting NetBSD, that's because if you ask me about command line options and so forth and you're running, say, PicoBSD, and PicoBSD doesn't have the -qzz option of some command, I'm not going to know how to do the equivalent because I don't run PicoBSD. That's why the slides actually say NetBSD, because that's the platform that I actually did this on. No, this is actually... actually hold on, let me see what I can do. This is such a hack. Okay: firewall versus gateway versus secure server.
Basically, this all depends on the purpose of the machine. One of the key things you want to avoid is overloading the purpose of the machine. If you're putting a single machine out on the network, you don't necessarily want to have that one machine do everything. You basically want to isolate it by task, if at all possible. By the time you get down to a personal server, trying to colocate four or five machines out on the net simultaneously, each doing its own function, may not be feasible. Actually, some of the topics I cover help isolate multiple functionalities on the same machine and protect the individual functionalities from each other. But generally, you want to try to avoid overloading a particular machine such that if something gets compromised in one area, you lose the whole thing. As you're building these machines, yes, you can use NetBSD for a firewall. If you're a corporate entity, you've got to look at various threat levels and the cost versus the threat. If you've got a lot of information, or a lot of reputation capital, that needs to be protected, you probably don't want to use any Unix-based firewall. You probably want to give Cisco a call and go out and buy a PIX, basically because that's all it does. It protects your machines. It's a firewall. It's dedicated. And if you're a major corporation, you may want to go that route. It's all based on cost versus threat models. Basically, most people think of firewalls as protecting some machine or some service on the inside from attacks on the outside, so you're expecting people on one side coming in and attacking the inside. There's no whiteboard. In order for people to actually get work done when you've got a firewall in place, you've got to allow various services to be able to pass through it. If you don't, people are going to attempt to find ways around the firewall and open up other security holes, which basically undoes your carefully laid plans.
Users are very tenacious when it comes to what they want to do. So especially in a corporate environment, you've got to have mechanisms in place to ensure that what you're doing, what you're blocking, and what you're allowing is what the users want and need. There are times when, yes, users may want a particular thing and you have to say no. You need to have a corporate policy in effect that says you can say no; otherwise they're going to try to go around it and there's nothing you can really do. Basically, you've got this machine sitting in between two networks, and there are times where you need to allow the inside to get out into the open area, and sometimes you need the outside to have the ability to get in. If you've got your personal machines firewalled off on the internet, it works really well when you're at home sitting right in front of them. But when you go out and you're roaming, you've got your laptop and your Ricochet or other wireless or remote connectivity means, and you suddenly realize that you've left a phone number at home and it's sitting on your machine at home. You tend to really, really want to be able to log into your home machine and get at that number. So there are mechanisms to allow you to get back into your machine behind the firewall securely. A lot of times if you've got multiple machines on the inside, you may want to set up proxy servers. Most common is the web proxy. This also helps with caching and so forth, and a lot of times most corporations will locate this on separate machines so as not to bog down the actual firewall machine itself. As home users, you can consolidate this onto one machine. Especially in the case of home users, unless you've got nice fast cable modems, you may want that caching if you've got a lot of people on the inside hitting the same site. There are a couple of different levels that we're going to talk about as far as dealing with what you're firewalling. Okay, who did that?
Here's some routing you can do on your firewall to help protect your internal network through the firewall. You can have your firewall routing different packets into different networks and ensuring that traffic goes to the right destination. Filters: you can start protecting things at the IP level, not allowing certain traffic to go to certain machines on the inside, but allowing the rest of the traffic to go to other machines. This basically cuts down on the number of attacks people can hit those machines with. In the case of small corporations and small offices, it will help prevent individual users from starting to run their own programs on their own machines that allow access into your network. A common thing is for people to run remote access servers on their machines, not telling anyone in the IS department, just so they can log in from home. The same thing applies to people setting up modems on their personal workstations to allow people to dial in; that immediately bypasses any firewall you can put up. That's really dangerous, because it doesn't take much now for a user to get something like that up. It's a few mouse clicks under Windows, and suddenly anybody can dial in to that workstation and have access to your internal network, and a firewall will do absolutely nothing to protect against that. You can also have TCP wrappers to watch what's going on and determine what's connecting to what service; NetBSD actually has TCP wrappers built in. The same goes for filtering: IP filtering is built in and ships with the OS, so it's right there out of the box. You can start writing rule sets and so forth to actually deal with that, and you don't have to worry about downloading additional software. It's all right on the box from the beginning. By writing rule sets you can cut down on the number of attacks.
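As a sketch of what those rule sets look like, an IPFilter fragment might read as follows (the interface name `ep0` and the RFC 1918 addresses are made up for illustration):

```
# /etc/ipf.conf fragment -- interface and addresses are illustrative.
# Only the designated web server receives inbound HTTP; nothing else
# on the internal net is reachable from the outside.
pass  in quick on ep0 proto tcp from any to 10.0.0.80/32 port = 80
block in quick on ep0 from any to 10.0.0.0/24
```

Because `quick` stops rule evaluation at the first match, the pass rule must come before the blanket block for the web server to stay reachable.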
Scanners: it is actually possible to go out and get software that will look at TCP streams coming in and analyze them for certain textual sequences. A lot of times they will look for the string "rootkit" and set off all sorts of alarms and kill the connection. So if it sees something like `tar xf rootkit`, it's like, wait a minute, something's wrong here, someone's unpacking a rootkit from a remote site on an internal machine: kill off this connection, log the attempt, and notify someone. So there is actually software out there that will do real-time analysis of the streams going through a machine. Personally I'm opposed to that just due to privacy considerations, and if you're running SSH like you all should be, your streams going through your firewall are encrypted anyway, so running that sort of scanner is not going to work. Logging is also very important, because we can collect, look at, and have available at our fingertips all the data that's going through the machine constantly, but if we don't do anything with it, if we don't do the correct things with it, it's absolutely useless. So the key point is that if you log to the machine itself and it ever gets compromised, people can mess with the logs. What you want to do is set up another machine internally, highly locked down, that is basically a log server. The only thing it does is take syslog messages and write them to disk. I've actually seen cases where they've gone to the extreme: since syslog is UDP based and doesn't require an ACK, they've taken machines that are plugged into the network via the AUI connectors, the old-style connectors, and have cut the transmit pin so the machine could not possibly talk back to the network. So the only thing that can happen is that machine can receive UDP packets. There is no way to really break into a machine like that, because the only thing you can fire at it is UDP. You're never going to get a TCP connection.
You're never going to get a response back. So you're really, really limiting what can be done. The worst case is that someone can attempt to fill up the log disk on that machine and then try more malicious attacks. But disk space is very, very cheap. You can go out and get 10-plus gigs for under 200 bucks. So trying to fill up that much space before someone notices is very, very difficult. But the main point is that logging is very important. Routing: a lot of sites will only need to set up static tables for the routing of their internal network. Most people will only have one connection to the outside world, and static tables are fine. If you're on a larger corporate-style network you may have multiple paths to the outside world. You may be running dynamic routing protocols, and one of the key things you've got to watch for is various poisonings of your routing tables. We actually had a little bit of a problem in Fremont a few months back when a user broadcast a BGP announcement out to the routers that basically told them: to get to anywhere in the world, you go through that machine. It pretty much shut down Fremont. Network address translation, NAT, is very common amongst home users and small offices, turning a single IP address or a small handful of IP addresses that were given to you by your service provider into a fairly large pool. In my bedroom I've got probably eight or nine machines. For my network connection they've given me one IP address. It's working just fine for me. I've had no problems with it. Works great. And you can actually deal with rule sets. I'm so dead by the end of this. Yes. Yeah. It's a little bit more difficult, but a lot of the cache poisonings I've seen are theoretically possible on a single pipe out. I don't know what set of slides I actually used earlier, but I've worked with BGP and Pancas on presentations before, and I've seen a lot of those slides, and it's possible with a single pipe.
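For the remote loghost setup described earlier, the client side is just a syslog.conf entry pointing at the log server; the hostname `loghost` and the file path here are assumed names for illustration:

```
# /etc/syslog.conf on the firewall and other monitored machines:
# forward everything to the dedicated, locked-down log server.
*.*     @loghost

# /etc/syslog.conf on the loghost itself: write everything locally.
*.*     /var/log/all.log
```

The loghost should run nothing but syslogd, and since the transport is one-way UDP, it works even on a receive-only network tap like the cut-transmit-pin machine described above.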
Someone asked whether doing NAT like that is going to put a load on the server that you might notice. Not really, because with NAT you're only changing a few bits and doing a checksum recalculation one level higher, but you're doing a lot of that calculation and twiddling of bits at the next layer down anyway, because you've got to drop the packet out on the wire, so you're writing a new MAC address into it and changing fields anyway. So it's not going to... yeah, it's work to be done, so yes, it adds load, but it's not all that significant. I've seen 486s doing IPsec that were saturating 10 megabit links; granted, that was single DES, but you can't buy a 486 anymore, so anything you're going to find now shouldn't have any problem, even when you start trying to do IPsec and encrypting everything. IP filters: this is the core of what you need to do in order to protect a machine. You've got to have a really good set of filter rules to make sure that what's getting through is what you want to get through. Basically, if you've got a set of IP filter rules in a file, about half the lines in that file should be rules. The other half should be textual comments explaining exactly what you're trying to accomplish with those rules. I've seen instances in corporations where people in different departments have requested of IS, oh, I need this port open, and IS has gone in and opened the port, and the need for that port being open dies off after several months. IS is never notified that the port no longer needs to be open, and IS didn't document who requested the port to be opened or why it's open, so it gets left in there for basically all eternity. So eventually someone sits down and says, okay, fresh start: we're going to lock everything out and wait for people to complain.
This usually gets IS into a lot of hot water in, I'd say, 99% of companies, because a lot of times those security groups just don't have the political power within the organization to stand up to marketing. The company is driven by marketing, and a lot of times they'll say, well, we need this, and management will say, okay, you guys make this happen, allow them to do this, and a lot of companies just can't stand up to that. But your IP filter rules are really important in what you're protecting and how you protect it, so you really have to use them to lock down your system. Basically the rule of thumb is: deny everything, and then only allow through what you want to allow through. There are a lot of rule sets out there that attempt to just block certain ports. If you know what you're doing, you can get away with just blocking certain ports. However, the safest bet is to deny everything and pass through only what you want to pass through. Like I said... actually, we jumped ahead a slide there. Like I said, comment what is allowed, when, by whom, and on whose authority it has to happen. An additional point: note whether the rule is ever to expire. That is, if we need this port open from now until next February, note that it can be closed off next February. That way you don't end up with this huge file with all sorts of ports open and nobody knowing how long they were supposed to be open, and you can go back and clean up your rule sets. You also want to make sure that when you're opening up a port, you find out what machine that port needs to go to. You don't want to open up a port to all the machines behind the firewall, because that can be used to attack other machines that may not be running that server, or that may have other resources and other types of servers running at that port address, which can then be attacked, because one group needed a port open for only their servers but it was opened up for everybody's.
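Putting those pieces together, a default-deny IPFilter policy with the kind of comments described above might look like this (all addresses, dates, and requesters are made-up examples):

```
# /etc/ipf.conf -- policy: deny everything, pass only what is justified.
block in  all
block out all

# Let the inside originate TCP connections; keep state so replies return.
pass out quick proto tcp all keep state

# 1999-07 / web group (J. Smith) / permanent:
# inbound HTTP, to the public web server only.
pass in quick proto tcp from any to 192.0.2.80/32 port = 80

# 1999-05 / engineering / EXPIRES 2000-02-01:
# vendor support needs telnet to the build box; close when contract ends.
pass in quick proto tcp from 198.51.100.7/32 to 192.0.2.33/32 port = 23
```

Each pass rule names who asked, why, where it may go, and when it can be removed, which is exactly the audit trail the undocumented-port story above was missing.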
TCP wrappers basically allow you to monitor who's connecting to what service, and they give you another layer of access control. This is at the next layer up: you're looking at attempts to establish a connection rather than just at the raw packets. This gives you better control over application-based security. Whether you write rules for TCP wrappers that just log what's happening, versus blocking ports, is up to your particular implementation and what you feel more comfortable with. Yeah, that would be one way of putting it, because it's assuming that... well, at least in the BSD kernel, all your routing and your filter rules are happening in kernel space; in other implementations, some of those packets may be popping up into user land. If you know you can deny connections, you want to get those packets out of the mix as soon as possible. It's less work for your higher-up layers if you can just eliminate them as soon as you know you can eliminate them. Then we've got scanners looking for bad things happening. A couple of products I want to mention that are out there: PortSentry, which was formerly Abacus Sentry, can be set up to start watching for people trying to connect. This works off of your IP filter logs. So if you don't have services running on 65,531 of your ports, but you have services running on just those last remaining four, you're going to want to be able to detect if someone's trying to scan you. You don't want to wait until they stumble across those four ports that you actually have something bound to. You want to see that, hey, someone's trying to pass packets on all these other ports and trying to find out what's open. So software like PortSentry will help you determine what's going on. Tripwire is software that tends to run on the machine itself.
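A minimal TCP wrappers policy follows the same deny-by-default philosophy; the network prefix here is a made-up example:

```
# /etc/hosts.deny -- deny anything not explicitly allowed.
ALL: ALL

# /etc/hosts.allow -- then open only what is needed.
# sshd only from the internal net; ftpd from anywhere (anonymous only),
# but every connection now gets logged by the wrapper.
sshd: 10.0.0.
ftpd: ALL
```

Since NetBSD ships with the wrapper support built in, this works out of the box for wrapped services without downloading anything extra.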
It will allow you to determine if the machine has been compromised, by doing checksums of the binaries and looking for things that have been replaced, trojaned, et cetera. Content scanning, like I mentioned earlier, is looking at the individual streams, looking for people sending suspicious or malicious content through the wire. Usually these things are fairly flexible to set up, so you can determine what exactly you want to look for. And it's not limited to just interactive telnet sessions; you can scan pretty much any IP stream looking for this potentially bad content. Logging, like I mentioned: the best logging is both remote and secure. You want to be able to protect that logging machine, because generally you're not going to set up a logging machine for every machine you want to log from. You're going to be coalescing a lot of your logging functions onto one machine, so you want to make sure that that machine is really, really secure. I've got a machine called... Sorry? Go ahead. Yeah, you can actually tell syslog to log to a remote machine; that's a capability built in. There are secure versions of syslog out there that add additional layers, so if someone does manage to compromise an internal machine and attempts to set up a sniffer or something... There we go. It's a little strange to be jumping that far ahead. Providing services. So you've got machines out there that need to do something. Aside from just having your firewall protecting your network, you do need to have machines that talk to the internet as a whole: web servers, mail servers, FTP servers, et cetera. So in order for any site to be useful, you've got to be able to offer services, and you've got to do something to protect them. Web is basically broken down into two categories. The web service itself, where people browsing the web will connect and get pages down, and the other half of that is content management.
You've got to have some way of dealing with the web content that's there, and a lot of times you want to break this into two categories as far as how you lock down the permissions. Interactive traffic is a hard one to nail down, because you're opening up the system to allow people to log into it, and one of the reasons you may want to do this is for content management of a web server or an FTP server, so that people can update it and keep it fresh. You can also deal with content and other logs. The other reason people may need a direct account on the machine is for hopping out, either from inside the firewall onto the firewall and then out to the outside world, or in some instances you may need to allow people in through the firewall. So being able to access resources both on the outside and on the inside of the firewall may need some sort of interactive capability. Administrative considerations: basically you've got this machine there, and it's going to require some administration. You need to be able to protect the access that's required to administer the machine. Since administration requires additional privileges, you've got to be a lot more careful about how the people administering the machine are going to do it. If you've got a large machine, or a lot of machines with multiple administrators, what you need to do is decide who is going to be doing what, and what they're actually going to need in order to get their job done. One thing you want to avoid in a corporate environment is shared role-based accounts, basically an account called webadmin. If you have one account called webadmin and you've got five web administrators, you really don't want all of them trying to log into this one account with one password, because now you've got five people with one password to one account.
If any individual writes it down on a Post-it note, or loses the day runner with it in it, it's harder to track how that password got compromised. And there's also no accounting of who did what. If someone decides they want to reorganize the entire website, there's no trail saying, oh well, this person came in as administrator and changed all these directories around. If you've got an administrator who has multiple duties and you've decided that he needs access on both web and FTP, but you've got other administrators who only need web, or only need FTP, one solution is to provide him with his own web administrator account, and then another FTP administrator account. So the admin who has to manage multiple services would have multiple accounts, so that he knows he is logging in to do this particular function. It's a little bit more work for the administrator to deal with the additional passwords, but at the same time he's not likely to accidentally do damage to another service area. Once you actually have all these accounts, one of the things that you're going to want to do is reduce the privileges down to what is absolutely needed to get the job done. All these people who have their own accounts: those should not all be root-equivalent accounts; that defeats the purpose. This is a little harder on Unix-based systems than on systems with B2-level security, where you actually have a capabilities-based permissions model and you don't have a root account, other than an account called root that just happens to be, by default, assigned all privileges. Some of those higher-end systems have accounts like backup, which basically gives read access to all files, write access to tape drives, and that's it. They can't change or modify or create any file they want. They can read anything, they can write to a tape drive, and actually they can read from the tape drive and write to a restore area, and that's it.
Unfortunately, this hasn't gotten popular amongst even most of the commercial Unixes and hasn't hit the free OSes yet. It is something I would like to see, because it vastly improves security: you can really lock down who can do what. Another thing that you're going to want to pay attention to, and there are mechanisms to do this, is compromise protection. This is an era where we've got a lot of free software out there, and by that same token we can't always guarantee the code quality, so any given version of Apache may have a bug that allows you to compromise root on the system. Sendmail is notorious for having this problem, where you can break Sendmail and get root access. The problem is that once you've got root access through Sendmail, in most installations you now have access over the entire machine. So as you're reducing your privileges, you have to understand that the defaults may be way too open as far as what permissions are set on what directories and how things are set up. This is done a lot of times because a lot of the newer users expect to be able to do things easily with their own accounts, so rather than really securing a system and getting tons of email from new users saying, hey, let me do this, I've actually seen instances where people from various OS camps have opened up the security on the OS just to stop getting those emails. Unfortunately, that's not the best route. The best route would have been to educate the users and provide more upfront information saying, hey, you've got to do this and this first. So the really big things that we're going to need compromise protection for are web and your mailers. FTP occasionally has had some problems, but not as many. We still want to attempt some compromise protection for web and Sendmail. So how do we do this? There's a wonderful, wonderful mechanism called chroot. With it, we can change what the system will consider the root of the directory tree to be.
So basically, we've got four or five key areas here. We've got user space, basically areas of the directory tree that users, or various applications, need to be able to access to do their job. We've got things in /dev that allow things to communicate within and out of the system. We've got shared files, which are basically files that are needed by more than one service: if you're providing web, the web server needs to be able to access the files, and so do the people who are administering the content. /var is needed in a lot of instances, because you've got your mailers pulling mail in and wanting to write to /var, and at the same time you've got people reading mail who want to see the same /var spool that the mailer sees, so they can actually pick up their mail. So a neat way of getting users into these chrooted cages is by starting up SSH in its own chrooted area. This will put any user who logs in via that particular SSH daemon into the chrooted cage, which is then very carefully built to contain only what is necessary for them to do what they need to do. To get things into these chrooted cages, especially the shared areas, our goal is to protect everything outside the cage from being accessible, so we've got to take care with permissions, user IDs, and group IDs. The way we get things into these cages is that their actual on-disk storage lives in another portion of your directory tree, and what you do is make local mounts from that point in the file system into the chrooted cage. Symbolic links will not cross a chroot boundary, but if you mount something inside a chrooted area, it will show up. By doing this, we can control exactly what any individual cage sees.
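On NetBSD those local mounts are null mounts; the idea can be sketched in /etc/fstab like this (all of the paths here are made-up examples):

```
# /etc/fstab fragment -- null-mount shared content into the cages.
# Web content lives once, outside any cage, in /export/html.
# The content-management users' cage gets it read-write:
/export/html  /var/cages/users/html  null  rw  0 0
# The web server's cage sees the very same tree, but read-only:
/export/html  /var/cages/httpd/html  null  ro  0 0
```

The single on-disk copy shows up in each cage with exactly the access that cage needs, which is how the shared-files area gets populated without symlinks.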
And as we're mounting these file systems into these chrooted cages, we can take advantage of a lot of mount options, such as noexec, nodev so you can't make device nodes, nosuid so you can't run set-UID binaries, options to disallow core dumps, and so forth: basically all your mount options. So you can really lock it down as much as possible. If you're mounting your binary tree for users to execute certain programs, you may want to mount that read-only, so that if the cage does get compromised, they can't change any of the binaries. chroot is actually a system call, so it's down at the kernel level, and it basically doesn't allow you to traverse above a certain point. With the chroot utility, you give it an argument of a directory and then an executable, and the executable is resolved relative to that directory. For the program that is then fork-exec'd off of that, its root, as far as the file system is concerned, is the directory that you passed as that argument to chroot, and it just can't go any higher than that, so it's locked into that portion of the directory tree. So, now we get to... the PowerPoint slides will eventually be available as soon as I can convert them into a real format, and they will be available on this website. Unfortunately, I got caught up with my real job, and I spent the past three weeks writing and then giving two courses on a particular API set that I wasn't planning on, and basically that consumed three solid weeks of my time and cost me a lot of sleep. SDF-1: any anime fans out there may recognize SDF-1. I actually have a machine named SDF-1 sitting out on the net, and on it I've got right now a web server that is managing three domains, each with content in varying degrees of completion. And I'm completely overworked, because work likes to just hit me with, can you please design a course and teach it next week?
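The chroot(8) invocation itself is as simple as the description above suggests; for example, starting the users' SSH daemon inside its cage might look like this (the cage path and config path are illustrative, and this is run as root at boot):

```
# Everything this sshd and its children see as "/" is really
# /var/cages/users; the sshd path is resolved relative to the new root.
chroot /var/cages/users /usr/sbin/sshd
```

From that point on, every login accepted by this daemon lands inside the cage and cannot traverse above it.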
I've gotten a couple of friends, or suckered a couple of friends, I hope they're not in here, to actually do content management for a couple of those sites. So I then realized, okay, I've got to let these guys log into this machine to edit these web pages. My original plan was to create all the content on my machine at home, and have a really simple script that allowed me to SSH in with RSA-authenticated keys, so it's basically good mutual authentication, push a new tar file of the entire web content up, and a script would run that moved the old content out of the way and dumped the new content in, and everything would be happy with no user intervention or logging in: really limited ability. Unfortunately, I don't have that kind of time, so I had to let them log in. And of course, they do not want to sit there the entire time and use vi to create the content. They want to be able to create this content in the comfort of their own homes, so I had to provide them a way of getting the content up to the machine. And they're unfortunately all on Windows machines, which don't have a nice, neat, secure copy interface installed, so I had to provide them with FTP, and the last thing I wanted was them typing their passwords into FTP, so okay, anonymous FTP only. And this was done to allow them to do content management, so the only writable areas are some hidden directories in their incoming area. So in order to accomplish all of this, I set up several chrooted cages: one for the web, one for users, one for FTP. Inside the users' area, when they SSH in, they end up in a chrooted cage, and mounted in from another portion of the tree is /html, which is the web content. This exists on another portion of the disk and is mounted into that area, so they can go ahead and access it.
The FTP area sits off in its own chrooted cage, and the directory that they're FTPing into is mounted from that chrooted cage into the users' chrooted cage, so they can access it. Since I've got to administer the machine, I've got another SSH daemon running on another port; that one lets me log in to the real root of the tree, so I can really log in and deal with the administration of all of this. With that, I can control and set up these various chrooted cages. But with two SSHDs going now, I had to start worrying about dev entries, because when tty permissions are restored, they're set back to 666 for some bizarre reason. Unfortunately, there's not one place where I can go in and tweak the code to say set the mode to 0 when you deallocate a tty pair. It seems that every single daemon does this itself. I looked through the SSH code and saw where it did it there. I looked through telnet and saw where it did it there. I looked through login and saw that it did it there, and went: this is not good. This means that I've got to make changes to several daemons every single time I upgrade, to make sure that tty permissions are reset correctly, so that as people are logging in and out, and as I log in as root via one daemon, someone logged in as a user in the chrooted cage who sees the dev entries there can't mess with them. So having multiple device nodes was not going to work because of these permissions. What I ended up doing is creating a subdirectory under /dev that only had hard links from it to the tty/pty pairs, plus hard links to standard in, standard out, standard error, /dev/zero, /dev/null, and /dev/random, and then mounting this subdirectory, which had a very limited set of dev entries, into the chrooted users' directory. This allowed the chrooted users' cage to have access to the dev entries that it needed, but not all the dev entries.
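The hard-link trick works because both directory entries point at the same inode, so owner and mode are shared: tightening permissions on the real entry tightens them in the cage too. Here is a sketch you can run unprivileged, using ordinary files to stand in for the device nodes (all paths are made up; on the real system the targets are tty/pty nodes under /dev and this is done as root):

```shell
# Plain files demonstrate the same inode-sharing property as device nodes.
mkdir -p fakeroot/dev/cage
: > fakeroot/dev/ttyp0                          # stand-in for a real tty node
ln fakeroot/dev/ttyp0 fakeroot/dev/cage/ttyp0   # hard link: same inode
chmod 600 fakeroot/dev/ttyp0                    # lock down the "real" entry...
ls -l fakeroot/dev/cage/ttyp0                   # ...and the cage's view follows
```

Because the cage's /dev is a separate subdirectory, the nodes you never link in, like the disks and /dev/kmem, simply do not exist inside the cage at all.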
So even if someone was able to get root in there, they weren't able to create dev entries for the hard disk. And basically, you can mount the file systems in with the appropriate permissions, but set so people cannot create dev entries in /dev, and mount all the other file systems nodev so they can't create dev entries anywhere else. You can really lock the system down and prevent people from making changes even if it is compromised. For situations like FTP, I don't remember exactly what version of FTPD I've actually got running, but most of the shellcode out there for FTP exploits assumes you have a /bin/sh. There is no /bin/sh in that chrooted cage. You're not going to be able to exec it; it just won't work. The only thing that lives in that directory is the FTPD binary itself. So unless you want to write an entire shell in shellcode, even if you can find a buffer overflow in that version of FTPD, it's not going to do you much good. And even if you are able to break root through FTPD, you now have access to the FTP files, and if I'm serving any FTP out, you're not going to be able to compromise those, because those have been mounted read-only into that chrooted area. So the capabilities that you're left with after compromising root are really, really limited, because of this extra effort to set up these particular cages and lock the available resources down to only what is needed for that service. And the same things can be done for sendmail, for your POP server, any other internet-related services that you need. You just duplicate these chrooted cages on out and only provide binaries that are absolutely needed. In fact, I haven't actually had any of my users complain yet, but they can't even do a who to find out who's on the system, because who doesn't exist in that chrooted cage. And even if who did exist, there's no /dev/kmem. 
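The mount options doing the work here can be sketched in the same illustrative fstab style (devices and paths invented):

```
# Served FTP content goes into the FTP cage read-only, so even root
# inside the cage can't tamper with it.
/export/ftp/pub  /cages/ftp/pub  null  ro               0 0
# Everything else the cages can reach is mounted nodev (and typically
# nosuid), so a root compromise inside a cage can't use a freshly
# mknod'd device entry or a planted setuid binary.
/dev/wd0e        /cages          ffs   rw,nodev,nosuid  0 0
```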
So even if they copy in a who binary, it's not going to do them any good, because it's just not going to work. And by those tokens, you can really, really keep people out of what you don't want them to do. And if you ever do need to administrate the machine, you've got an entirely different port set up where you can log in as an administrator and do the things that you need. And if we flip back real quick to those IP filtering rules: since you've got a different port locked off for the administrator, you can control from what sites someone can log into that port. So basically you can say, I allow users into the user chrooted cage from anywhere; I have no control as to where my friends may actually want to log in from. Or I can limit them to logging in from machines I know they have accounts on. But for logging in at that real root of the directory tree, I know that I really only want people to get in from my disordered account, my remark.ordered account, and maybe my DSL address. So I can lock it down to those three systems and prevent anyone else from getting in. Or, if I only want to administrate the machine from the console, I can just not allow any network access to the root of the tree, while at the same time still having people able to log in to do work. As I mentioned, we can scan for activity of malicious users attempting to do things even if they have an account. There are a lot of ways you can detect this: you've got scanners walking down what executables they can actually run. This kind of reeks of Big Brother, and I haven't actually even endorsed this, but if you're working for a corporation that has a very well-written security policy saying this is what you're allowed to do, this is what you're not allowed to do, and if you do something you're not allowed to do, you're fired, you have a better chance of getting away with scanning the interactive traffic to find out what people are doing, logging, and shutting down connections as appropriate. 
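That split between the open user port and the locked-down admin port might look like this in IP Filter terms. The interface name and addresses are invented for illustration, not the speaker's actual rules:

```
# /etc/ipf.conf sketch: anyone can reach the user sshd on port 22, but
# the admin sshd on port 2222 answers only to a couple of known hosts.
pass  in quick on ne0 proto tcp from any            to any port = 22   flags S keep state
pass  in quick on ne0 proto tcp from 192.0.2.17/32  to any port = 2222 flags S keep state
pass  in quick on ne0 proto tcp from 203.0.113.5/32 to any port = 2222 flags S keep state
block in quick on ne0 proto tcp from any            to any port = 2222
```

Dropping the two `pass` lines for port 2222 gives the console-only variant: users still get in, but the real root of the tree is unreachable over the network.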
Other neat features I've seen in B-level secure systems: the kernel keeps track of what network interfaces you're talking to, and if you attempt to bind to more than one network interface, it can detect this, kill off that process, and then log who, what, and when attempted this violation. So basically, if you've got a system that's a firewall and you're authorized to log into it internally to manage some content, you're coming in from the internal address. You can go in and say, okay, I can log in, I can do my work, but you wouldn't be able to set up a daemon that listens on both the internal and the external interfaces to allow it to pass any sort of traffic. If you're trying to move any content in and out, you'd have to do a sort of bounce: move the content onto the machine and then move the content off. You've got better control over what's going on by adding that additional restriction. Other things that could be done in the future, but just aren't really set up smoothly right now, are additional login restrictions where you're controlling who can log in from where, and when. Basically, time-based, location-based control of logins without filtering at the IP level. You say, okay, it's okay for Joe to log in from home after 5 but before 9 a.m.; no problem, he works from home at that time. But we don't want to let him log in remotely from 9 to 5; we want him to be here in the office doing his job. So that's an additional direction for the future. Any questions? Sorry. So, back there. I'm in the initial stages of packaging this for general consumption. The machine that I described, SDF-1, was set up this way, and it's been set up that way for quite some time now. And I haven't had too many user complaints. The only thing that has really come up is, it would be really nice if we had Pico. So that's been the only comment on that. Any other questions? 
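For a flavor of what that time-window idea looks like where it has since been implemented, here is a FreeBSD-style login.conf sketch. The class name is invented and the exact `times.allow` syntax should be checked against login.conf(5)/login_times(3) on the system in question; as the talk says, nothing did this smoothly at the time:

```
# Hypothetical login class: remote logins allowed on weekday evenings,
# overnight, and weekends, but refused during office hours.
remoteusers:\
	:times.allow=MoTuWeThFr0000-0900,MoTuWeThFr1700-2359,Sa,Su:\
	:tc=default:
```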
Occasionally, you enter a situation where your firewall just isn't enough, so you really have to take it to the next level. This may be a little hard to see, but if you talk to John afterwards, that's a GE minigun. And... that's a really cool drawing. You'd probably be going back a bit if you actually fired that, but I'd like to see it. I'd like to see the malicious user it was targeting, too. So if you're interested in the shirt, talk to John. Any other questions? I would compare it to Judaism, Hinduism, Christianity: it gets down to a religious war. I've had lots of luck with NetBSD. I've been running it since probably '93, '94. I probably still have 0.8 install floppies floating around, which was one of the first publicly consumable versions. That was well before the OpenBSD split. NetBSD currently runs on 16 different hardware architectures, so pretty much, if it's got a 32-bit CPU or greater and an MMU, you've got a good chance of it running, or of getting it running without too much work. As for FreeBSD, their goal used to be the best-optimized free version of a BSD for the Intel platform. Linux, unfortunately, is just way too diversified. Some of its strengths have really contributed to some of its weaknesses. Yes, it's got a lot of drivers available for it, because you've got a lot of people writing drivers for it. Unfortunately, you do have a lot of people writing drivers for it, so the code quality may not be the same across the board. From the firewall standpoint, there's a lot of driver sharing between FreeBSD and NetBSD, and OpenBSD is in there looking at the same code as well. They're all BSD-stack derived. I don't know what Linux did to the BSD code when they moved it into their kernel, but they've been having a lot of problems with it. I know NFR was having issues running on Linux because the stack just wasn't up to going at full speed, so Linux may not be the best option in light of that. 
I haven't had a chance to actually test that out, but that's something I have heard. There were issues with the Linux stack, so to each his own, but that might be a potential issue. Does that sort of answer your question? Anybody else? Questions, comments? More beer? Okay, thank you. If you have any additional questions or longer questions, feel free to come on up.