Hi, my name is Tep — that is my real email address, if you want to send me a note. It's been getting a lot of spam since the 90s, so just put DEF CON in the subject line. I'm gonna talk about some ancient history: what it was like to build secure systems for the government, for the NSA, in the 80s.

I've done a few things off and on. I wrote my first paper on computer security in high school. It said that sometime in the future, companies and the government would be able to manipulate millions of bits of information, and that this would mean privacy and computers would be intertwined for the rest of our lives. And I got an A-minus. The teacher said, love the writing, beautiful — I just don't buy the premise. And that was 1970-something.

Also, if there's anyone here from NSA, I apologize for the security stand-downs of a system in 1982 and 1984, and if you can prove to me that you were there, I will buy you a beer. I was young, I needed the gold.

Along the way, I worked on Multics. I worked on GECOS. I worked on the Worldwide Military Command and Control System. KSOS, which I'm gonna talk about — the Kernelized Secure Operating System. I worked on SPADOC, which is the system that NORAD uses to track bolts and other things in orbit. I was at SDSC during the Kevin thing, but I'm not the Tom in Takedown — that was the Tom who was the security guy before I came along, and he and Tsutomu did not have a good relationship. I've also done work in critical infrastructure protection, co-founded HTCIA San Diego, and a bunch of other stuff.

So, here's what we're gonna talk about. Since I don't see it there, I'll have to see it here. I'm gonna talk about the 80s. How many people in here were actually alive in the 80s? Wow, okay. I know that USENIX and LISA have been getting a little gray over the years, and I just noticed that DEF CON's been getting a little more gray and a little more mature too.
I stepped into the elevator last night and two young guys with DEF CON badges said, "Excuse us, sir." So that really hurt.

So, I'm gonna talk about the 80s: primitive computing, as my boss would say — steam-powered, but not quite. What the community was like — it was very, very small; the community I dealt with was about the size of the first DEF CON. What you were doing in computer security research, and the state of the art. I'm gonna talk about KSOS, which is the system that I worked on the most during the 80s, and then some lessons and predictions. We really got some things completely wrong, but we got some right.

Anybody recognize this? The Berlin Wall. When did the Berlin Wall come down? 1989. So duck-and-cover was mostly gone by the time I grew up, but I do have older cousins who told me about duck-and-cover, and when I was in high school, even though we all bought the brand-new latest SR-10 and SR-51 calculators, we all still learned to sling a slide rule, just because we had to be prepared for the fall of civilization. We were mostly kidding, but not completely. That was the mindset.

Anybody recognize this? Right, the Doomsday Clock, from the Bulletin of the Atomic Scientists. In 1983, 1984, the clock was set to three minutes to midnight. So during the time we were building all this stuff, the Doomsday Clock was at three minutes till. That was the closest it had been to midnight since 1953, when the US and the USSR both tested thermonuclear devices within months of each other. This is the mindset, this is the paranoia, this is the we-could-all-be-gone-tomorrow.

So secrecy — we did a lot of it. Anybody recognize this book? Right. The Puzzle Palace came out in 1982 and it pretty much forced the NSA to admit that it existed. And this was about the time that DES was published, and of course no one trusted them, because number one, they were the secret government group that no one had ever heard of, and number two, they wouldn't say why they changed the S-boxes.
And so a few years later we figured out that, wow, they really knew what they were doing. But for a very long time nobody trusted them.

How about this book? So here I am, I show up the first day of work, January 3rd, 1983. Number one, I'm the youngest person in the office by a good 15 years. I'm also the only person there who is not either ex-NSA or ex-Navy SIGINT. So when these books came out, everybody was hiding in their office. I was reading them and going, holy shit, somebody's going to jail. Oh, somebody's really going to jail. Yeah, so that was my introduction to the world of secrecy. And think about this: The Hunt for Red October was the very first fiction book published by the Naval Institute Press. I understand that James Bamford is here, and I'm hoping that someday I'll get to meet him.

So here's what big computing was like. IBM and the BUNCH: Burroughs, UNIVAC, NCR, CDC, and Honeywell. Several years before, it had been IBM and the Seven Dwarfs, when you added GE and RCA — GE's computer business went to Honeywell, and RCA's went to UNIVAC when they got out. But still: IBM and the Seven Dwarfs, IBM and the BUNCH, and they pretty much controlled everything. Big mainframes, millions of dollars — and every company could have one? Not so fast.

Anybody recognize that one? Yeah, sorry about that, didn't mean to cause any flashbacks. MS-DOS 1.0, 1982. But it got better: this guy came along. 1984, 128K and a floppy disk, because no one will ever need more memory than that. The original Mac was not so much a success, because in 1985 Jobs departs Apple after a power struggle with John Sculley. How did that turn out for Sculley?

On the Unix side, things are looking pretty good, right? We've got Bell Labs. Unix has escaped from the labs — to the government contractors and the universities. The dot-coms still didn't have a clue what was in their future. They were still chugging along with VMS and AOS and all kinds of stuff.
And then their R&D departments decided: the bean counters won't let us buy a mainframe, but we can buy something smaller, okay? So, ha ha ha.

Okay, who has not seen this movie? Yeah, think about this — here's the kicker: this movie set society's and law enforcement's expectations for hacking and what hackers could do for a decade, okay? 2600 comes out. DEF CON is still ten years in the future. So, yeah, thanks, Hollywood. Shall we play a game?

So, as I said, R&D groups, small groups, were deciding that they could buy this thing called a PDP — a "data processor" — because it wasn't a computer. So the bean counters and the MIS departments couldn't say, we buy computers, you don't. This is a data processor; it says so right in the name. You can tell it's the 80s: if it isn't the frock and miniskirt, it's the guy's sideburns. But this was how things were going.

And then we got this. If you were on the ARPANET and you had a VAX, you put VAX right in the machine's name — MIT-VAX, NORAD-VAX, Logicon-VAX — because you were progressive, you were cutting-edge, and you had a machine that could do a million instructions per second. This changed everything.

Oh yeah, and then there were these guys. If you were old enough and you ported enough Unix code, you may have heard the refrain "all the world's a VAX." And for a decade, all of the Unix code was written on VAXes, and then it was your job to port it off of the VAX. And then a while later, it was "all the world's a Sun," and you ended up having to deal with all of the Sun-isms that people had buried in their C code.

So we had this guy, the DEC PDP-11. These are still in use. Fortunately it's been reduced to a single chip, but it still lives on in a lot of embedded systems, especially onboard ships, where they either run the single-chip version or they run PDP-11s in emulation for things like fire control and radar — because that's what they had.
16 bits, one to two meg of RAM, and a thousand instructions per second — this was a one-KIPS machine. Oh yeah, and you got those really beautiful, expensive 176-megabyte drives. Some of the women in our department wouldn't do backups because they couldn't lift the disk packs. And you can tell it's the 80s: lovely colors, beautiful plumage. So why are the bits grouped in threes? Octal. None of that fancy hex stuff for us.

So, memory test: anybody know what this is? You're dating yourself — core memory. Donuts on wires. That is 8,192 words of 18-bit RAM, on a card that's about 11 by 15 inches, I think. And if you take a close look at it, it really is donuts on wires. So now you know why it's called core.

So, moving right on: the VAX. We loved it. 32 bits; if you had enough money, six to eight meg of RAM; and none of that core stuff — there was no core. This was the first machine DEC did with all-semiconductor memory, and one million instructions per second. This was awesome, and it only cost a million dollars. The first VAX I bought, we bought it used, and I paid about $450,000 for it. It had sold new for a million dollars two years before. So: a million instructions, a million dollars. That's the way it worked. And oh, by the way, those washing-machine drives there — we upgraded those: 300 meg.

Programming languages. We still have the old classics. We've got PL/I, we've got Fortran, we've got Lisp, we've got COBOL. Don't knock COBOL — it probably still writes your paycheck. And by the way, if somebody's doing financial calcs, I would prefer they do them in COBOL than in anything else, because it does BCD and actually rounds properly. You should know that if you've done any computer science studies, but you probably didn't — or else you've been pre-gaming DEF CON for the last three years.

Pascal, Modula, Modula-2 — those are gonna become important. Pascal was the first language designed to teach good programming, but it was a little short on features, hence Modula and Modula-2. And Ada comes along.
Don't knock Ada until you've tried it. Yes, it was commissioned by the DOD — actually it was designed by Honeywell — but having written about half a million lines of code in Ada, I would prefer to write in Ada than pretty much anything else on the screen. And this is when Perl and Tcl are just beginning to arrive.

So, structured programming was the big thing. None of this fancy object-oriented crap. This was structured programming, because until this time, people were still really in love with the goto.

So, what kinds of tools did we have? We had stone knives and bearskins, and we were glad of it. We had ed — love it. Emacs and vi — well, vi, where did vi come from? Berkeley, yeah. It shows up in the early BSDs, from Bill Joy. But we're talking PDP-11s here. No automated software testing, no frameworks. And make. The very, very, very first O'Reilly book I bought, in 1984, was the one on make, and I think that was one of three books they published then. So, things have gotten a little bit better.

Terminals. So, anybody here ever programmed on an ADM-3A? You didn't miss a thing. The most un-ergonomic piece of crap there ever was, but it was the video terminal, and universities and the government bought them by the bucket. You could pay extra and get green phosphor so that your eyes would rot out twice as fast. 24 lines of 80 characters. Why 80 characters? Punch cards.

Now, DEC came out with something really, really novel: a terminal with a separate keyboard. And you could get an option on that that let you flip back and forth between 80 columns and 132. Why 132? Nice try — green-bar line printers. Line printers were 132 characters, hence all the fancy green-bar stuff. So you could flip to 80 columns to see what your punch cards looked like, or you could flip to 132 to see what your reports would look like when you printed them.

Okay. And then the ARPANET. 1969, two nodes. That's all you needed.
So like I said, I showed up at Logicon the 3rd of January, 1983. Anybody know what happened on January 1st, 1983? The ARPANET took the first step towards being the Internet, because that's when it flipped from NCP to TCP/IP. That was the flag day. So go look at the Wikipedia entry for "flag day" and you will see why it's called a flag day. That was the first and last incompatible transition in Internet protocols, and it was the suck. There were some sites that were knocked offline and didn't come back for three months while they tried to get their IP stacks working.

So: 1983, 500 nodes. 1984, 1,000 nodes. 1986, 5,000 nodes. Nobody knows how many nodes today. This graphic right here is from about 1995, I think. Throughout this time, the DEC-10 and DEC-20 mainframes are about the most popular. VAXes, if you could afford them; lots of PDP-11s; a little bit of Multics; some Sigma 6, Sigma 7, Sigma 9 stuff.

Anybody know what the backbone speed of the ARPANET was? 56K. And the modem was about half the size of this table, and the power supply was the other half of the table, because AT&T knew how to make stuff that was gonna survive a nuclear holocaust — tubes in the power supplies and all that kind of stuff. So now at home — well, not at home, but if you were directly connected to a computer — you could get 110, 300, or 1,200 baud.

At this point, the culture is still about secrecy. How do you become secure? You keep it all a secret. Almost all of the security research is being funded by governments: the US government, the UK government, and other governments that I'm not gonna talk about. Universities didn't want to go anywhere near this. That's being part of the military-industrial complex. We don't build bombs here. We don't do government research. I tried to start a PhD at the University of California, and I said, I want to do my PhD in computer security, and they said, well, number one, we don't do that.
And number two, we don't know what that is. And number three, even if you could find a graduate advisor who would, we won't let you do it, because then the government will be here and they might direct your research in some way that might be evil. So things have changed a little bit.

This is one of my favorites. At this time, we're still taking some software bugs and classifying them. So let's say that you have a system that's processing top secret information and you discover a bug in that operating system or that application. That bug is classified at system high for the system in which it was discovered. Problem is, you can't give those back to the manufacturer to fix, because all their programmers don't have clearances. So what you do is you go find another machine somewhere that is running unclassified but has the same OS or the same software, and then you "find" the bug there, and then you slide that to the manufacturer. They fix it — and how convenient, someone else found the bug, now they can fix it. I saw that about five times. It was very disheartening.

Computer security was a very small club. There were really two security conferences — nothing at all like today. The conference that I attended the most was put on by NBS, the National Bureau of Standards, who ran it for another agency that was nearby that also began with N. This was a really small conference in the 80s. It was about 200 people — sometimes larger, sometimes smaller, but around 200 people that you kept seeing year to year. And a lot of them — I was the noob — a lot of them knew each other and had known each other for years. It was a very, very small community.

Now, on the West Coast, there was the IEEE security conference, which was a lot more open and a lot more focused on business and that kind of thing. But the NBS one was very much a research gathering of people who were doing things for the government. Here's how much of a noob I was.
So I gave a presentation at NBS, and as I stepped off the stage, this very nice gentleman — beautiful suit, a bit of an Eastern European accent — walks up and shakes my hand: hi, how are you, I really enjoyed your talk, and we should get together sometime for a beer or something. I was like, sure, why not? And he gives me a card. I don't even look at it, I just put it in my pocket. And then I walk away, and one of the other attendees says, you know you have to file a contact report. A what? Yeah, a contact report. Why? Well, number one, he's a Soviet citizen. Number two, he's the cultural attaché. Number three, he's GRU. So when you get back home, don't forget to file a contact report. At least he gave you a card with a name that we'll know, so we'll be able to match everything up and know who you talked to. So that was my introduction to that kind of security.

The Orange Book, near and dear to my heart. This was the first time that people actually tried to sort out what it meant to be secure, and how secure — or really, it's more about assurance of correct operation. This was a way to classify, to define levels, and if a machine met the requirements for a certain level, it could process information at a given level. There was a matrix that said: if the highest level of data is top secret and the lowest level is secret, then you can use A1 or B3 or whatever. So this was a way to actually build a taxonomy of how secure a system is — how much assurance you have in its correct operation. Published first in 1983, updated in 1985, completely obsolete by this point, but the levels are A1, the B levels, C, and D.

C was essentially discretionary protection. Your system had very little in the way of assurance, but it had discretionary access controls — whether those are access control lists or user/group/world. Unix in general fit into the C level.
The B levels: mandatory security. This meant that every piece of information in the system was labeled by the system when it was created — with a mandatory label like secret or top secret or confidential, and maybe a set of compartments — and that label would follow that information forever. The system would, mandatorily, compare the label of that data to your clearance and decide if you could see it. There was no discretion at all. The data was created at a level, and it could only be accessed by things at that level or higher.

Then at A1 we start asking: all those features are really cool, but is it gonna work correctly? Is there any assurance that the design and the implementation are actually right? So the question is, how do you get to A1? Are there any experts in predicate calculus in the room? We are gonna have a talk. I learned enough to do some work, and that about broke my brain. Predicate calculus is one of the ways that you formally describe the system, and then you can create some theorems to check whether the design is actually correct.

So at A1 we're talking about formal methods, not just formal models. Formal models describe the system; formal methods are how you create it. What we're looking for here is to mathematically prove that there's a high assurance of correct operation. And the way you do this is: you describe the system using some kind of non-procedural language, you generate a bunch of theorems from that, and then if you prove all the theorems, life is good. I don't prove theorems very well. As a matter of fact, I was an engineer — I am an engineer — not a mathematician, and this was really, really hard. Fortunately, some other folks had done a lot of the work before I got there, and I just had to re-prove some of it and prove some new pieces that we wrote.
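The recipe sketched above — describe the state space, derive theorems about the transitions, prove that nothing bad is reachable — can be illustrated at toy scale as a little model check. This is a hedged sketch in Python, nothing like SPECIAL or the Boyer-Moore prover; the two-variable state and the "system" rules are invented for illustration.

```python
from collections import deque

# Toy illustration of the A1-style proof obligation: start in a defined
# secure state and show that no sequence of transitions can reach an
# insecure state. States here are hypothetical (data_label, reader_clearance)
# pairs; real systems had thousands of variables, hence the pain.

LEVELS = {"unclass": 0, "confidential": 1, "secret": 2, "topsecret": 3}

def secure(state):
    """Invariant: nobody is reading data above their clearance."""
    data, reader = state
    return LEVELS[reader] >= LEVELS[data]

def transitions(state):
    """Every state change this toy 'system' permits: a reader may open
    data at or below their clearance; clearances never change."""
    _, reader = state
    return [(d, reader) for d, lvl in LEVELS.items()
            if lvl <= LEVELS[reader]]

def check(start):
    """Breadth-first search of the reachable state space; True iff every
    reachable state satisfies the invariant (the 'theorems' all hold)."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if not secure(s):
            return False
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True

print(check(("unclass", "secret")))  # → True: all reachable states stay secure
```

Note how fast this gets hopeless at real scale: with a 16-bit word, every additional global or static variable multiplies the state space by 65,536, which is why they generated theorems about transitions instead of enumerating states.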
So in proving a design correct — this is the nutshell at the DEF CON level, and what I remember — this is about what you do. You describe the system's complete state: all the global variables, all the static variables, all possible values of all those variables. This generates a really big finite state machine. Really big. Space is big — you have no idea. Think about this: a 16-bit mini has two to the sixteenth possible values for every variable you believe is global or static, so the state space blows up exponentially with the number of variables. That's the size of your finite state machine, in theory. Then you derive theorems about all of the state transitions across the nodes in the finite state machine, and then you "just" prove the theorems. Interrupts make things interesting, because those kinds of state transitions don't always start in the same bubble. You're chugging through your finite state machine, suddenly there's an interrupt, and you end up over here in this other state. So that makes finite state machines fun. This is when I learned to drink.

So how do you describe this? You use special-purpose languages like SPECIAL and Gypsy. The problem is, you have insufficient computing resources and software technology to prove — or even describe — really big systems. You have to keep it simple. So in a nutshell, the way we define this is: I start in a defined secure state, I define theorems that cover all of the state transitions, and as long as all of the state transitions are proven to never lead into a not-secure state, I'm done. Right? I mean, it's like three steps: one, two, profit.

But what does it mean to be in a secure state? And this is where two gentlemen, Bell and LaPadula, did some really cool stuff. They created this thing called the Bell-LaPadula model.
And this was really the first formal description of security in terms of computing. So they said: we're gonna make this simple. There are objects — anything that stores data is an object, whether it's a file or a byte or anything, it's an object. And then there are subjects, which are programs or processes — the active things that cause objects to be accessed and changed and modified. And then we're gonna describe, with a couple of rules, how they interact. Okay, pretty cool.

So these are the two primary properties of the Bell-LaPadula model. Simple security: a subject cannot read from an object at a higher level. That basically says that if I have a secret clearance, I can't go into the file cabinet that has the top secret stuff and take it out. I may not access things at a higher level. However, let's say that I've got access to something at the secret level — why can't I just give it to my friend who has a confidential clearance? That's the star property. They couldn't come up with a name for it. They didn't know what to call it, so they just put a star there and figured they'd sort out what to call it later. It's been the star property ever since. So the star property basically said you can't give away information that you have access to, to someone who doesn't. There's also some other stuff, like the transitive star property and a bunch of other complications, but these are really the two big ones.

This looks really simple, right? I can't see things I'm not supposed to. I can't give away things that I own or that I have access to, to someone else. That's a very easy description. Go read the paper — trying to describe that mathematically gets interesting. But this is all about comparing labels. Labels are what's on a document. Labels are what's on an object. Labels correspond to a clearance. It's really easy to say top secret is greater than secret. That's easy. Secret greater than confidential. That's easy.
What happens when you add caveats, code words, and handling designations? How do you compare top secret plus NOFORN plus NIGHTMARE GREEN plus SR-71 — if those even exist, because if NIGHTMARE GREEN exists, I'm worried; oh, you've read the book — how do you compare that to top secret plus NUKE SUB plus BLUE CRYPTO? Is BLUE CRYPTO greater than SR-71? Is it less? How do you compare these labels? Well, since I did used to teach computer security for a living, I'm going to invoke my professorial prerogative and say this is left as an exercise for the student. I would like a formal mathematical proof, and show your work.

So proving all this is complicated. The design-and-proof process was so onerous that the only way out was to create extremely small operating systems, and this is where we end up with security kernels.

At the same time this is going on, we've got high-assurance systems. Remember, we're not talking about secure systems; we're talking about high assurance of correct operation. Not secure, but high assurance of correct operation. So we've got the Message Flow Modulator going on with Dr. Don Good and the guys at UT. They did a dynamite job: 1,200 lines of formal specs for 600 lines of code, and then it took PhDs and the Boyer-Moore theorem prover on a DEC-20 to prove it all. But they actually proved a 600-line program to be provably correct. 600 lines — how many months? Not exactly practical.

Then you've also got Multics, which has been around for a while and is in use at all kinds of places like NSA and the Air Force Data Services Center. So it's cranking right along, and it was evaluated at B2. And then you've got some research stuff going on: IBM was doing this thing called KVM/370, and we and Ford Aerospace and Honeywell were doing things called KSOS.

So, the KSOS story — I told you all of that so I could tell you all of this. KSOS was the Kernelized Secure Operating System.
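For what it's worth, the usual answer to that label-comparison exercise is that labels with compartments form a lattice, not a total order: one label "dominates" another only if its level is at least as high and its compartment set is a superset, and many pairs are simply incomparable. Here's a hedged Python sketch of that idea, together with the two Bell-LaPadula checks; the compartment names are the talk's jokes, not real ones.

```python
# Sketch of lattice-based label comparison. A label is (level, compartments).
# "Dominates" means: higher-or-equal level AND superset of compartments.

LEVELS = {"confidential": 1, "secret": 2, "topsecret": 3}

def dominates(a, b):
    """True iff label a dominates label b (a is cleared for b's data)."""
    a_lvl, a_comp = a
    b_lvl, b_comp = b
    return LEVELS[a_lvl] >= LEVELS[b_lvl] and set(a_comp) >= set(b_comp)

def can_read(subject, obj):
    """Simple security property: no read up."""
    return dominates(subject, obj)

def can_write(subject, obj):
    """Star property: no write down -- the object's label must dominate
    the subject's, so information can't flow to a lower level."""
    return dominates(obj, subject)

ts_sr71   = ("topsecret", {"NOFORN", "SR-71"})
ts_crypto = ("topsecret", {"BLUE-CRYPTO"})

# Neither dominates the other: the labels are incomparable, which is
# exactly why "greater than" stops being the right question.
print(dominates(ts_sr71, ts_crypto), dominates(ts_crypto, ts_sr71))  # → False False
```

So BLUE CRYPTO is neither greater nor less than SR-71; the partial order just refuses to rank them, and a reference monitor denies access unless dominance holds.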
The DOD wanted a secure system, but they wanted a minicomputer, because those only cost hundreds of thousands of dollars — or only a million — as opposed to tens of millions. And like the government often does when it wants to mitigate risk, it ran two parallel programs. KSOS-6 went to Honeywell: a modified Level 6 with some extra boards that made it look like Multics. And then there was KSOS-11, which was: fuck it, we're going to do it in software. The goal there was to run a secure operating system on general-purpose hardware by using good software engineering techniques. So KSOS-11 was born at Ford Aerospace.

What do the RISKS Digest and TCP/IP congestion control have to do with KSOS-11? Anybody here read the RISKS Digest? One person. Oh, okay. Who runs it? Peter Neumann, right — he was one of the authors of KSOS-11. And who do we know most for TCP/IP congestion control? The Nagle algorithm. John Nagle was also one of the developers of KSOS-11.

So when this was transferred to Logicon, I ended up having to wade through all this code. To this very day, it is some of the most beautiful and elegant source code I have ever seen in my life. I don't know who those guys were — well, actually I do know some of them — but it is some of the most awesome, beautiful code I have ever seen in my life, and it is what we should all aspire to. I haven't figured out where to send the Freedom of Information Act request. I have the contract number, and I may or may not know where a copy could be found, but — well, I'll come to that.

So at Ford Aerospace, I think the team size was like eight to fifteen, off and on. When it transferred to Logicon, we had a contract to build guards to sit between networks at different security levels. And they said, oh, by the way, you have this operating system, you should use it. We looked at it and it was like: cool.
This is really good stuff; we're gonna use it. And we actually deployed it in operational systems. In a couple of cases, they sat between NSA systems and Army systems, or NSA and Navy systems. These were processing intelligence information as it was being gathered and then doing correlation, and KSOS was the thing that sat in the middle.

This I cannot say enough about. Shannon's maxim — the way we used it was: secure even if the attacker has all the source code. No security through obscurity; the attacker actually has the source code and all the design documents. This is really big. Until we started talking about this, remember that whole secrecy-equals-security thing? If you wanted to build a secure system or be involved in security at all, you had to keep it a secret. Here was source code for an A1 kernel that wasn't classified. Oh, think about that in terms of open source.

So the development technology — what did we have to build this stuff with? Well, we had PDP-11s. We had Unix with the Programmer's Workbench, which is what you had before Version 7 — if anyone knows what Tyco is, I'm gonna talk to you after, because you're nodding too much; we got our version from Tyco. We had 24-by-80 ADM terminals at 9,600 baud, and we had ed. All the docs, all the formal specs, all the code — we did all of that in ed. We had no make, but we had the shell — the Bourne shell, not bash; the only shell that existed at the time, /bin/sh — and we had a Modula compiler.

Yeah, about that Modula compiler. Anybody recognize the gentleman in the upper right there? I'll give you a hint: he also invented Pascal. Say again? Wirth — thank you, yes, that class worked. This was an experimental compiler from Zurich. "Here's a compiler. It is good, you should use it. There's no support, and don't tell anybody you're using it." Wait, what? It's Modula, so it's like Pascal, only better. It's Pascal with modules.
However, this was a toy compiler that they had built to prove that Modula was a viable language. So the entire program had to be submitted in a single file. No forward references, which meant that if you needed to use a procedure down at line 20,000, you had to define it somewhere earlier in the file — so you could read the dependencies of all your procedures from the bottom to the top: at the bottom was main, and everything it needed was above it. And there was a limit of about 32,000 lines. If you tried to give this compiler anything more than about 32,000 lines, it would just barf.

So what are we gonna do? How are we gonna do an operating system in 32,000 lines? Well, we did. Actually, Ford Aerospace did, and then we took advantage of it. 41 system calls. Modern Linux has 300-something. But it turns out that with these 41 calls, you can implement all of the others on top.

But remember that whole thing about single-file compilation? Well, we used find and cat and grep and a whole bunch of other stuff to take all of the source code modules, stitch them together in the right order, strip out the comments, and then submit that to the compiler. And why did we have to strip out the comments? Because we had 33,327 lines of source code and the compiler would only take 32,000. So you can ask yourself how come I know there were exactly 33,327 lines of Modula code.

So here are the primitives. With these 41 primitives you can implement pretty much any operating system — we even did TCP, but not in the kernel. That's really all you need.

So: provably correct design. 45,000 lines of SPECIAL, which is a non-procedural description language. That was turned into several thousand theorems. You feed those to the Boyer-Moore theorem prover on the DEC-20 at UT, and you pray that it solves the theorems that you'll never be able to.
And then whatever's left, you have to work your way through: you lead and hint the theorem prover into proving the theorems. You can give it hints, you can create lemmas, all that stuff. And then that leads to proofs of all the theorems. We had added two kernel calls, so I only had to re-prove the existing theorems and prove the two new kernel calls — and that took me months, because I'm not a PhD and this was kind of new.

Now, KSOS did a few things that were kind of cool. We kind of anticipated the future: object stores are the big hot thing now, right? S3 object stores — all you've got is a file handle, and that's all you need. That's all KSOS had. If we had put in a hierarchical file system, we would have had a lot more proof work to do, and it would have generated a lot more theorems. So we just said, we're gonna do an object store. There are file handles — file hashes, that's your handle — and don't lose it, because if you do, you'll never see that file again. So it's very much like S3. We did build a user-space file system, kind of the way the Fast File System and many Unix file systems have been built since: you build them and debug them in user space and then shoehorn them into the kernel. We couldn't do that last part on the PDP-11 because we were running up against the limits of RAM — you only have 64K instruction and 64K data space — but on the VAX we were okay.

We also did the TCP/IP stack in user space. So if you had a KSOS system that was sitting between three networks at different classification levels, you had three processes, each one attached to a network, and each of those processes ran at the same security level as the network it was attached to. This was a way of classifying the network traffic: you just classified it based on the label of the process handling it.

So, Unix compatibility became important.
We did a V7 emulator: 110 calls, and it fit in 48K. So when people tell me they don't have enough RAM on their machines today: bite me.

KSOS-11 was actually deployed in operational systems. In many cases system-low was Top Secret and system-high was TS plus compartments and things. There's nothing above Top Secret, but there are things that go with Top Secret. And then the government said, wow, wouldn't it be really cool if we put this on a VAX? And we said, you betcha, buy us a VAX, please. And they said, yeah, how about a used one? I said, sure, I'll take that. So they bought a VAX.

So in 1985 we undertook our first completely new KSOS work, building on what had been done at Ford Aerospace: taking what was in KSOS-11 and translating it to KSOS-32 in such a way that we would not lose all of the hard work we'd done in theorem proving. Six megs of RAM, a two-user machine, and there were never more than two of us, so it was awesome. Two guys and a VAX, right? We started with Berkeley 4.2BSD. Thank you for Emacs, thank you for vi, thank you, thank you.

We improved the toolchain. We had a Modula-2 compiler, and this one wasn't experimental; this one actually had support. This was essentially a straight port: it had the same 41 system calls. The source code count is a little bit higher because we actually did additional device drivers for the VAX. Much faster development, two guys cranking on code. And we also had this funky build process, because we had to maintain commonality with the KSOS-11 source code as much as we could; we didn't want to lose all of the theorem proofs that had been done and have to re-prove or rewrite them.

So here's how you started the build for the VAX. First you check out the KSOS-11 PDP-11 source code, and then you pass it through a bunch of Emacs macros. These Emacs macros were a Modula-2-to-Modula-2 cross-compiler.
Anybody who has anything bad to say about Emacs? This was a Modula-2-to-Modula-2 translator built in Emacs. So yeah, take that, vi. Then, once you've done this translation of everything, you take out the PDP-11 modules and you throw in the VAX modules. So now you've got a bunch of Modula-2 code, plus the new process system that we built, the new memory management, the new device drivers, and you compile all that into assembly code.

Why do you compile it into assembly code? Because you need to do the sed hack, which I'll come back to in a minute. Think about this: how do you get a compiler to emit instructions like "load process context" or "manipulate this specific machine register"? High-level languages like Modula-2 don't have those instructions as statements. So you have to find some way to emit the specific weird instructions you need to build an operating system. So you apply the sed hack, then you assemble the result into a new kernel, then you use dd to put it on the KSOS system pack, and then you reboot your VAX.

You don't do this more than once or twice a day, because this is also your only dev machine, right? So you develop in the morning until everybody's happy, and then you run the compiler, run the assembler, run everything, and then you go to lunch. Then you come back, do the dd, reboot, say "fuck," start fixing bugs, and lather, rinse, repeat.

The sed hack was one that I came up with, and I have to admit there was alcohol involved. What you do is, in Modula-2, you make dummy declarations of all of the instructions that you want. For the load-process-context instruction, for example, you make a function that takes all of those arguments, so that you can then call this dummy function from Modula-2. Cool, right? So I just described it, no big deal.
Well, when you run the compiler, it emits push, push, push, argument, argument, argument, and then a call to the dummy function. What the sed hack does is go find all of those instruction sequences that are setting up calls to the dummy functions, and edit each one into the exact instruction that you want. Then you run the assembler on the hacked-up code.

So all this happened, and we got to first user login. I actually logged in on the console as admin. The password was not "admin." And then the project was canceled. So, yeah, I know. Well, the government had been sold a bill of goods by some of the manufacturers: "You guys are spending a lot of money on research with universities and all those stupid defense contractors. We're computer people. We build operating systems for a living. Give us that money and we'll give you commercial off-the-shelf A1 and B3 operating systems. Trust us." Anybody ever bought an A1 or B3 operating system? No one ever did. SE/VMS security was really, really good, absolutely no doubt; it was a real good try, and I believe it was actually entered into evaluation. But this just killed SCOMP, KSOS, KVM/370, and a host of other research stuff that was really getting good results, because the government was just going to start writing checks and buying this stuff.

So I wanna wrap up with what all this means. Why should you care about something that was done 30 years ago, 10 years before DEF CON even started? Well, we got a lot of stuff wrong. I mean, we really did. We were being told by the mathematicians that if you applied all of these formal methods, you'd come up with a provably secure design, you'd be able to implement it, you'd be able to prove that the code was correct, and everybody would love you because your machines would be secure.
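The sed hack described above can be illustrated on fake compiler output. The assembly below is invented for this sketch (real VAX Modula-2 output would differ, and a fuller version of the hack would also delete the argument-push instructions), but the shape of the trick is the same: the compiler emits an ordinary call to the dummy procedure, and sed rewrites that call site into the privileged instruction no high-level language will emit.

```shell
#!/bin/sh
# Sketch of the sed hack. "LoadProcessContext" is the dummy Modula-2
# procedure; "ldpctx" stands in for the privileged VAX instruction we
# actually want in the kernel. Both the mnemonics and the calling
# sequence here are illustrative, not real compiler output.
set -e
cat > kernel.s <<'EOF'
        pushl   r0
        calls   $1, _LoadProcessContext
        ret
EOF

# Rewrite the dummy call into the real privileged instruction,
# then (in the real build) run the assembler on the result.
sed 's/calls[ \t]*\$1, _LoadProcessContext/ldpctx/' kernel.s > kernel.hacked.s

cat kernel.hacked.s
```

The design win is that everything upstream of this step stays in pure Modula-2, so the theorem-proving work over the source is undisturbed; only a mechanical post-pass touches the generated assembly.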
Well, not so much, because "engineered" and claims of "well designed" turned out to be good enough for the market. By the way, how's that working out for us? Last time I looked, not so good. No one was willing to pay for this high-assurance stuff: too expensive, too simplistic, didn't have enough features. And besides, we got DOS, we got Unix, we got this Mac thing, and they've got all kinds of features and all kinds of interface stuff. So what do we need with all this security stuff anyway? It just gets in the way.

There was also a lot of work in the 70s and 80s on better programming languages. People actually looked at the code that people were writing and said, wow, it's amazing that you can get anything done, because these programming languages are crap, and if you use these features, they lead you to write crap code. Your languages are part of the problem; they make it more difficult for you to write good code. Let's solve that with new programming languages. And then came C. How's that working out for us?

But we did get a few things right. This is the money slide. Actually, these are the two money slides. Number one: software quality equals system security. Period, full stop. The thing that goes with that is strong configuration management. If your code is crap, your security is crap. If you don't have decent software quality, you don't have decent security. Take a look at the CVE and the CWE lists and you will see that the programming languages we use make it difficult to write good code. We are not so good at the software quality thing, even though there are people who actually know how to help us be good; we usually tell them that we're too busy. But software quality equals system security. Number two: crypto is easy, key management is hard. That's pretty obvious.
We did get right the importance of logging and audit, even though the mathematicians were telling us you don't need that, because your systems are going to be perfect, so no one will ever get in. The little voices in our heads were saying, yeah, you know, it would be good to know what happens when they do get in. So: lots of logging, lots of analysis. And this led into intrusion detection. For a lot of the intrusion detection research, you should read a book by Becky Bace called Intrusion Detection. She literally wrote the book on ID, because she funded so much of it when she worked for the agency. Because of the PC explosion, we didn't really do IDS in a big way; we did antivirus instead. But you've got signature-based versus procedural and anomaly detection in both, and in that way IDS and antivirus are like twin brothers.

So, to wrap up: please don't repeat our mistakes. Just because we're old and gray doesn't mean that we're stupid. And we weren't stupid; we were just a little naive when we were your age. There's some really good stuff out there, and all the best stuff is not on the first page of Google hits. So you're gonna have to dig a little deeper, and some of it is gonna be on dead trees. But go find the proceedings of the IEEE and NBS and all of that, and go find the research papers coming out of Stanford and MIT and UC and all that.

Computer security is more important than ever, and it's too important to be left in the hands of corporations that care about their bottom line more than they care about your privacy, or governments that don't care about your privacy at all. And you are the counterweight to crap code and crap OSs and crap security. And look at the systemic problems. If you're gonna fix a bug in a piece of code, don't just fix that bug; go find why that bug was made, and then go find all the other bugs that were made for the same reason, by the same person, or because of the language they used.
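That "hunt the whole bug class" advice can be made concrete with a one-liner. Everything here is illustrative: the source tree, the file, and the pattern list are invented, and a real audit would use proper static analysis rather than grep. But the idea of pivoting from one fixed bug to every other instance of the same unsafe idiom looks like this:

```shell
#!/bin/sh
# Sketch of hunting a bug class instead of a single bug: after fixing
# one unchecked strcpy, sweep the tree for every other use of the same
# unsafe C string functions. Paths and patterns are examples only.
set -e
mkdir -p src
cat > src/auth.c <<'EOF'
#include <string.h>
void save(char *dst, char *src) { strcpy(dst, src); }  /* unsafe */
EOF

# One pass over the tree for the whole family of bug-prone calls.
grep -rnE 'strcpy|strcat|sprintf|gets' src/
```

Each hit is a candidate for the same review (and the same fix) as the bug you started with, which is exactly the systemic view the talk is arguing for.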
Finally, I'd like to thank some folks. I'd like to thank DEF CON for inviting me back. I've been out of the business for about nine years, so thank you for being polite and giving me a break. I'd like to thank my Multics mentors and my KSOS partners in crime: Jeff, who was also an editor of the Orange Book; Sid, T.S., and Drew from SDSC, who helped me with an unpleasantness a few years ago; Terry, Amanda, Todd, and my other Bureau friends; Loc Mack, Half Dime, Ian Sven, and Hockey Geek at my current employer; and my family, for letting me come to Vegas and talk to you all. So thank you.