The leash I have today is only... actually, it goes all the way off the edge of the stage, so it's enough rope to hang myself. That's good. All right. I think we're going to go ahead and try to get started, modulo two hours, but I think we're ready to go. Let's see. Lots of ground rules. I'm Bruce Potter. I'll have an intro slide later. It's obligatory; you have to do it so people know who the random guy on the stage is. I gave basically this same talk at Black Hat a couple of days ago. So if anyone here was at Black Hat, I apologize now, and you can go see another talk, because this will largely be me just screaming and ranting like before and not saying anything of value. The one thing that was, I think, kind of shocking to me is that the talk was received very well, and it shouldn't have been. I should have been lynched. Last year I gave a talk at DEF CON about why you should use Windows rather than Linux if you care about security, and I lived. So this year I've tackled something that I hope is equally confrontational, to prove that I can, indeed, at least drink the crappy beer and live to tell about it. I was going to dress up as the Church Lady, but when I started to pass that idea around to people and say, "Could it be... Satan?", people didn't know who the Church Lady was. They thought I was a little crazy. Dana Carvey? The Church Lady? I assume most of the hands up know who the Church Lady is. All right, so clearly I should have dressed up as the Church Lady, but it's your loss, so I apologize. The Black and White Ball, on the other hand... you'll see me dressed up like the Church Lady on Saturday. Other odds and ends? Not really. First off, don't believe anything I'm about to tell you. It's my obligatory slide. For those that know, where did I get this quote from? Liquor store. I heard it, someone said liquor store. Wow, someone's actually seen this before.
I was at a liquor store buying some $5 whiskey, and there was Buddha staring at me. He rubbed his belly, and he told me this. It's interesting. You'd go to DEF CON and Black Hat and conferences like that and listen to people talk, and they would say the most random crap that wasn't technologically sound, wasn't based in reality. And I think it was largely due to the fact that we don't have a lot of discipline in what we do in this particular community. We kind of shoot from the hip and say, look at the cool stuff we can do. Note, I didn't say shit. I'm trying to avoid swearing, because some GCN reporter actually quoted me saying "God damn it" in my Black Hat talk the other day, in the same sentence where my employer was named. So there'll be no F-bombs at all. None. Zero. Anyway, so there used to be this problem with people kind of saying random stuff that wasn't necessarily true. We have a different problem now. The security community is very big. There's a lot of money in it, turns out. There's a lot of money in offensive and defensive capabilities. We used to have this dialogue about full disclosure, and it was this huge navel-gazing contemplation of: do you tell the vendor now, or do you inform the users first, or do you tell everybody at once, or what do you do? And Microsoft and @stake and all these people got together and formed this thing called the Organization for Internet Safety. They drafted this thing that was a counterpoint to RFPolicy on full disclosure, and everyone debated this like it was profound and interesting and whatever. A couple years later, it turns out that this was the wrong argument to have, because now we're all selling our exploits. We're selling them to companies. Nothing against 3Com and the other companies out there that are buying the exploits, but we have a different debate on our hands.
We're not exercising full disclosure in the same way that we used to, and that really can change the motivation of the people that you see up here on stage. I have my motivations. I'm biased. Clearly, I'm giving a pro-trusted-computing talk at DEF CON; I clearly have a bias and potentially a chemical imbalance. But we really have to pay attention: who's on stage, and why are they telling me this? Why are they ranting about these things? You've got to ask yourself, when you see all these talks, what's that person trying to do? What's he trying to accomplish? So I encourage you, use your brain. Be cynical, right? We're cynical about code. We're cynical about firewall rule sets. We're cynical about routing protocols. But when it comes to idiots on stage, we believe them wholesale. So just think about it. Throw crap at me. Disagree. I encourage you to be as confrontational as possible, within reason. Simple's up here to defend me if it comes to that. Thank you. Thank you. There, I'll move along. Actually, real quick, anyone know what a Volvo 1800 is? All right. I have a 1972 Volvo 1800 on blocks. It's in my garage, though, not in my yard, so it's not quite as redneck. Got a big hole in the floorboard. It's the sweetest car; I was restoring it. So anyway, two things to accomplish today in what's becoming a shorter amount of time. One, I'm going to make the case for trusted computing, hopefully. Or at least I'll try to make the case for why there's good in this whole world of evil that people seem to see in it. I'll also demonstrate the TPM running on this MacBook Pro right here, and we'll release some code, and it'll be all wonderful. Sprinkle in some good arguments, some beer, et cetera. Here's my anecdotal history of what we've accomplished in the last 40 years of infosec. It's not complicated. We've been solving problems, and we've been doing a reasonable job at it, but the problem keeps getting more complicated. How many people in here review code for a living?
Handful of people. Got a few people who review code. Is it easy? Is it easy to find bugs? Yeah, it's easy to find bugs. OK. This is not my cell phone that is ringing. Hello? Oh, Richard? No, this is not Richard. Actually, this is Richard's cell phone. Hold on one second. And there you go. I thought I was going to take the red pill. Anyway, so the problem keeps getting more complex. Reviewing code is hard. Reviewing code in an embedded system like we could have designed 15 years ago is a hard problem. Reviewing code in an enterprise service-oriented architecture is currently an impossible problem. We can't write secure device drivers, as was demonstrated by Johnny Cache and David Maynor at Black Hat, let alone understand at an enterprise level what security looks like in these massively complex systems. But yet, that's the buzzword. Go buy Computerworld or some other contemporary magazine, and this is what we're talking about. We're talking about solving big enterprise security problems. We don't even have our hands around the stuff that's 15 years old. So we have a sustainable, profitable industry where we're chasing an impossible dream. This is fantastic. That means we'll all get paid very well. Now, while that's all well and good, it's not actually making anything better. These lines should, at some point, intersect or at least get closer, but they're not, really. This is founded in data. I have thousands of data points to back up these two lines if you want to see them later. Infosec trends: defense in depth. So the core problem is currently unsolvable. We can't build provably secure systems. Oh, there's a dirty word, provably secure systems. So let's throw a bunch of band-aids on it. Let's slap on firewalls and IDSs and IPSs and execution control mechanisms and all kinds of stuff, and hope that we always stay in front of the wily hacker. Because the wily hacker apparently is an idiot and doesn't learn, right? Wrong, okay?
So with phrases like defense in depth, we sound like we're being responsible. Sure, it's the best practice. It's what we've got going today. This is the best idea we've got, but we need to think about how we're gonna solve these core problems that we're facing in infosec. Things like: if you access the system directly, you win, okay? Physical access to a system with a BackTrack boot disk wins the day, period, right? There are very few situations where that doesn't get you everything you need. And people have even gone and built malicious slave devices you plug into things that auto-root stuff, you know? If you can touch the device, poof, there you go. What else is there? And then, transactions are trusted at a network level. End-to-end, device-to-device security only really exists in controlled and highly... well, controlled. I couldn't think of another word, failed miserably. Highly controlled environments. So this is kind of the situation today. I'm being really cynical today, I apologize, but this is the third time I've talked about this in like a week, so I'm kind of down on it. How did we get here? Here's a paper from 1971. Butler Lampson wrote a paper called Protection. He actually wrote a lot of papers around this period. I'll give you a couple seconds to read through this. Can you read it in the back? Nods, good. So, you know, is this any less true today? Is this any less true of what we're trying to do with systems today, be it your cell phone or your enterprise web server or your SOA gateway? This is the same damn problem, right?
His paper goes on to describe, in a nutshell, multi-level security, MLS: basically object-level separation enforced by the kernel, keeping things really secure. The idea is you can keep multiple classifications of data away from each other in a military environment, but in general, you can keep the good data from the bad data. And the need for hardware security to enforce this data separation and this object-based access control. 1971, okay? 2006, for reference, in case your calendar broke. We haven't gotten real far. Hey, it's a blank slide. No, it's PowerPoint foo. Ha, ha, ha. This is as good as it's gonna get, folks. I'll give you a few minutes to read these quotes. All right, any guesses when this was written? Yesterday? Ha, ha. Another guess. '70? Someone saw the talk. 1972, actually. '68, right, it was a long time ago, again. This was from a computer security technology planning study, October 1972, published by the Air Force. Okay, hey, smells like the same thing we're doing today. Right? If you go through this paper and you redact all the specific systems and networks and random network topologies that I've never heard of, it's exactly the same problem. Wow, we've done a fantastic job in building this industry. All right, so, I'm not gonna be employed by the end of this. The road is littered with corpses. MLS: people have been chasing MLS, trying to do this, for years. Turns out it's really hard, right? We try to build provably secure operating systems with formal methods and all kinds of stuff, and that hurts. It hurts a lot, okay? With complex operating systems, when you try to build one, the formal methods and everything go out the window the moment the fingers touch the keyboard. As soon as a developer, a human, gets involved, it's no longer provably secure at any reasonable scale. I hate to say it, but we're not arming our developers the best that we could.
It's like going off into the streets of some inner city with a Nerf gun, walking up to people, smacking them on the head, and picking a fight. It's not the greatest idea, but that's about how our developers are armed these days. They don't learn security when they're in college. They might be able to go to a conference and learn how to audit code and maybe learn how to write some good code. That's not a pervasive thing. There aren't armies of millions of developers out there who can write uber-secure code. This is a problem. This is something that academia has to address, and they're getting better, but operating systems are complicated. Software developers don't have the tools, don't have the knowledge, don't have the time to write secure code. So the best you can do in that situation is layer on other protection mechanisms that developers aren't involved with, like firewalls and execution protection, like... what's the thing, grsec? No, the... StackGuard, thank you. All right, trucking right along, because my memory is failing me. All right, fast forward to 2000. DRM emerges on the scene. Woohoo, man, everyone was making money. It was 1999 and everybody was getting rich, and then it crashed. DRM stayed around, though, turns out. Content was king. AOL Time Warner was going to be the biggest, baddest corporation in the world. Didn't work out so well. DRM, though, is a key part of how the content evolution, revolution, kind of came about. Content providers had a mode where they were used to selling their stuff and getting money for it, and as much as Napster and crew were allowing you to download it for free, there had to be a sustaining business model, and digital rights management fit it pretty well. Microsoft even has DRM-like capability in Office now, where you can control where files get sent. It's not perfect, right? Like DVD Jon: he proved DVD encryption was not perfect. Not rocket science; there was a static key.
You get the key, you can rip DVDs. How many people have ripped DVDs on a PC? It sucks, doesn't it? It really hurts. How many people have ripped DVDs on a Mac? Is it easy? Yeah. HandBrake, baby. There's a great program out there called HandBrake. You hit one button. I was offended, when I tried to do it on a PC, at how bad it was. You download 15 different little codecs and widgets and then nothing works together. Blue screens of death that I hadn't seen in forever. Pop the disc into my Mac and, oh, there it is. Oh, and I have two cores, so I can do anything else while it's ripping, too, because I'm badass. I might not seem like a Mac fan, but I probably am. So DRM can be subverted. If I'm really interested in overcoming the copy protection mechanisms in an Office document, I'm gonna just type it into another document, right? It's not gonna keep the evil people from doing evil stuff. If I wanna take an MP3 or an AAC file that is protected by DRM, I'm just gonna transcode it to MP3, right? It's not rocket science. There's nothing stopping me from doing this. DRM's gonna keep legitimate people legitimate. So we all hate it, right? It's all evil, lose control of my content, blah, blah, blah. Guess what? It's cool. Go ask any college student. Apparently iPods are cooler than beer. This is not the college I went to. I don't know where they took this survey. But anyway, it's all just mind-numbing to me. So ultimately, there's some transitivity here, so let's follow the path. iPod: one of the big things that made the iPod successful is the existence of the iTunes Music Store. The iTunes Music Store could exist because there's a DRM mechanism that Apple came up with that was tenable to the content providers, so the content providers could put content on the iTunes Music Store. Apple could sell it for a buck. You could buy it.
You could get the Britney Spears song you haven't heard in 15 years and download it to your iPod, and no one had to know, because you didn't even have to go into a store. It's like buying porn, you know? It's a dirty little secret. I buy porn and Britney Spears albums online. February 2006: one billion songs sold on the iTunes Music Store. Now, Budweiser, they sell a lot of beer. One billion songs sold on that thing, that's actually a lot of songs. That officially probably puts them in the realm of cool. So, transitivity: Apple made DRM cool. Nobody even realized it. It was sexy, it was useful, it was something the users wanted, and they put up with it, you know? If you made the argument on the face of it, of, hey, it violates your privacy and whatever, people would probably get upset about it. When you say, hey, but it's a dollar, oh, their eyes get real big and they just kind of wander off into space. Like, ooh, a dollar. So, okay, trusted hardware. What does this have to do with Apple? Apple's doing the same thing with trusted hardware. I hate to tell you this, but it's gonna have the same effect. The MacBook Pro runs on an Intel chip, okay? Apple, six, seven years ago, decided they were gonna get away from allowing people to sell cloned Apple products, and so they got a stranglehold back on their hardware platform, and the model's worked out well for them. Apple's growing, you know, their stock's growing great, they're making a lot of money. The IBM relationship wasn't working out so well, so they moved to Intel, and Apple wants to maintain this kind of... not stranglehold, but certainly good, strong grip on the hardware market around their software. Knowing, though, that they're walking into the x86 market. How many people in the room own an x86-based PC? All right, good. Just so we're clear. I thought there might be a few of us. You know, that's a bit of a threat, because OS X is actually kind of cool.
I mean, all things aside, it's a Unix that runs Word, all right? So you can go own a box and write a letter to your mom and fax it all at the same time, and you don't have to use OpenOffice and play around with print drivers. It all just freaking works. Fantastic. So when they made the switch, you know, there were certain bits of proprietary things they wanted to protect. I need to caveat this by saying: this is my crystal ball. I've shaken it up, I've looked into it, and this is what it looks like to me. I don't have any inside information from Apple. This is what I've read on the net and what I've been able to synthesize from looking at these machines. But what they want to do is keep this grip on the hardware. So one thing that was really interesting when they made the migration: they have, I don't know, 20 years of legacy PowerPC code... actually, I guess 10 years of legacy PowerPC code, and they want to be able to run that on the x86 platform. So they could do the nonsense they did with Classic, which is where you instantiate a whole virtual machine, suck up a bunch of memory, and run slower than death, or you can actually just do the translation in real time. I think they actually acquired a company; they've named the product Rosetta, and it does real-time translation of PPC opcodes to x86 opcodes. Neat. This is their differentiator. This allows them to seamlessly transition to the Intel platform. This is the little bit of a thing that they don't want running on every PC, so that people don't just start running OS X everywhere. So what they do is they put a TPM in this thing, and when Rosetta instantiates itself, it checks to see, hey, is this really a MacBook? If it is, okay, it'll run. If it isn't, you can't run your PowerPC stuff. Pretty neat. There's a Russian hacker by the name of Maxxuss who has repeatedly bypassed this protection, and Apple's lawyers are repeatedly beating him about the head and face.
I challenge you to go find it right now on Google. I'll give you 10 minutes. I'll give you a t-shirt if you can find me a Maxxuss patch. They're really hard to find right now. They're really hard to find. And all he was really doing, though, was going into the binary and snipping out the calls to the TPM, so it just said, yep, everything's fine, and the thing would start. Software. So the problem was, clearly, there's an architectural problem there. It was still subvertable by software. But there are long-term solutions, and I'll get into them in a minute, that would actually make that very difficult, though not impossible, to bypass. So, backing up a step. I used an acronym on the previous page that I didn't explain, which in consulting-foo is bad; I'm not having a good logical flow. So, backing up a step, because I didn't feel like rewriting the slide: the Trusted Computing Group used to be the TCPA. This is an industry group. It's not the IETF. This is not an open forum where you can just go to Zurich and hang out for a week and talk about the direction this thing is going. You have to buy your way into this thing. They're setting up standards for trusted computing systems and architectures. They used to be solely focused on the development of standards around the trusted platform module, a chip that gets embedded on the motherboard. Now they've branched out. They realized just building the chip isn't gonna make anything happen. So what they need to do is focus on, hey, let's talk about infrastructure, mobile devices, PC clients, servers, trusted network connect, storage, the software stack, how you access this thing. That's kind of important. And tie it all together. So there are all these documents and all these specifications that you can get to try to help you build these systems. That's what the Trusted Computing Group is. And if you read their charter, at no point does it say "be evil," be the counterpoint to Google: don't be evil, be evil.
Ha ha, that was funny in my head. So anyway, thank you, thank you. Try to be evil. I'm done, all right. So it strikes me that people think this is inherently evil. And I think the ultimate anecdote for that is the really angry doll on The Simpsons that's trying to kill Homer, and the guy comes over and is like, oh, you have it set to evil, click. And oh, it serves him food and everything's fine. This thing doesn't have that property one way or the other, all right? It's gonna be what we make of it. So let's talk about privacy for a minute. First of all, I need to tell you up front, full disclosure: I have fully bought into the Church of McNealy that says you have no privacy, get over it, okay? McNealy said that many moons ago at some Sun event, and I believe him to be fully correct. I know other people in this audience disagree with me vehemently, and that's cool. So let's talk about privacy a little bit. First of all, previous versions of the TCG, or the TPM spec, actually were pretty bad when it came to privacy stuff, and would allow the content providers to really grab hold of your machine and shake it vigorously without allowing you to do anything. And there was a lot of pushback, understandably. There was also a lot of pushback to other things. Remember the Intel serial number issue? When they put serial numbers on the chips? There was no end-user benefit there. That wasn't sexy and cool, right? Again: DRM, iPod, sexy, cool; serial number on the Pentium, not. There was nothing in the middle, right? It just... it was bad. So the new specification allows for a couple of things. One, the owner controls personalization. Note: owner. Not administrator, not the guy in front of the computer, but the owner, okay? In an enterprise, when you're using a laptop given to you by the company that you work for, you have no expectation of privacy on it, period. I'm sorry, I know that there are laws that allow for workplace privacy and things like that.
But by and large, you should not assume that anything you do on that box is subject to your own little private whims. It can all be controlled and monitored by your employer. So the owner in that case is probably the company. When you buy your own laptop and take it home, you're the owner. You can do anything you want with it, all right? You can take over the chip, you can turn it off, you can turn it on, you can reset it, you can do anything you want to. The private personalization information is never exposed, and the keys are encrypted. I'll get into all this kind of nonsense. In a nutshell, there are keys that can be bound to the TPM, and there are keys that you can export off the TPM. The original spec just had keys that were bound to it, which is pretty evil, because then if your box dies, all the data that was encrypted with that chip is gone. On the flip side, nothing really ever got made with that particular specification, so you don't need to worry about it much. Anyway, that was a rambling, horrible slide. I'm gonna shoot it. Trusted platform module. The chips are manufactured by a bunch of folks. At a high level, they've got three things they can do. One, assured cryptographic operations. When I say assured, it means someone looked at the implementation and said, yeah, this thing's okay. Not that it's uber-cool or whatever; it's just gonna do the same thing every time. You can't change it, right? You can't download a new library version and break it. It's just gonna do RSA correctly, every time. Two, it serves as a trusted key store. The keys live on a little chip, inside the chip; without a lot of high-end gear, you're not gonna be able to extract them. Three, it'll make an integrity statement. It'll basically say, I'm okay, everything's A-okay. Or maybe it's not okay, but it'll cryptographically sign those messages and say that it's not okay. The TPM on its own doesn't do anything. It sits there. When you boot the box, it's gonna hang out.
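To make that "integrity statement" idea concrete, here's a toy sketch of the quote concept: sign the current PCR values plus a challenger's nonce, so the verifier knows the report is fresh. This is illustrative only, not the TPM wire format; a real TPM signs with the RSA attestation identity key, and the `aik` here is just an HMAC key standing in for it.

```python
import hashlib
import hmac

def pcr_composite(pcrs):
    # Hash the selected PCR values together, roughly what a TPM
    # does before signing a quote (the real structure differs).
    h = hashlib.sha1()
    for value in pcrs:
        h.update(value)
    return h.digest()

def quote(aik_key, pcrs, nonce):
    # Sign (PCR digest || nonce). A real TPM uses RSA with the
    # attestation identity key; HMAC stands in for illustration.
    return hmac.new(aik_key, pcr_composite(pcrs) + nonce, hashlib.sha1).digest()

def verify(aik_key, pcrs, nonce, sig):
    return hmac.compare_digest(quote(aik_key, pcrs, nonce), sig)

aik = b"not-a-real-AIK"
pcrs = [hashlib.sha1(b"bios").digest(), hashlib.sha1(b"bootloader").digest()]
nonce = b"fresh-challenge"
sig = quote(aik, pcrs, nonce)
print(verify(aik, pcrs, nonce, sig))        # True
print(verify(aik, pcrs, b"replayed", sig))  # False: the nonce defeats replay
```

The nonce is why a verifier can trust the statement: without it, an attacker could just replay a signed "everything's A-okay" message from before the box was compromised.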
It doesn't set the evil flag. It doesn't do IP over avian carrier. All it does is sit there. I booted this Mac into Linux just now; the TPM wasn't even activated. It didn't get in the way of anything. Higher-level systems must use the TPM to do something, right? You can't just expect it to intermediate. Again, it is not magical. There's no magical property about this. "Magic's not real." That's what my son says to me. He's eight years old, sorry. How'd you do it? I said, it's magic. "Magic's not real." He's disenfranchised at eight already. It's great. The TPM spec says the user must have the ability to turn off the TPM chip. That's off, O-F-F. That means the user always has control of the device. Again, though, if you have software that needs it, that means the software might not work. Okay, there's a trade-off here. Who in the what now? Like the operating system, particularly. So let's talk about that. There's a lot more, but I've paid some hecklers in the room, particularly ones that look like pirates. And truly, there may be things that you wanna use that, if you don't want them to access the TPM in your system, are not gonna work. Okay, now here's the thing. One user isn't important. Same thing I tell people when I talk about wireless security. Nobody cares about you as an individual. They care about you as a group of people. They care about getting 100 million credit card numbers from a VA laptop. They don't care about getting one credit card number. You find me the identity thief that's sitting around sniffing someone's home network looking for credit card information, and I'll show you the dumbest man in the world. All right, so there are market pressures involved, though, with massive amounts of people rebelling against something. So if your operating system doesn't boot because it requires some piece of trusted hardware that nobody wants to use, the market's going to adapt.
People will use other systems to get their job done, and people will stop making it that fascist. Okay, or maybe the law will get involved, and the EFF or the ACLU or somebody will sue to make that change, okay? This is a cultural issue, right? And this is the evil side. And clearly that's a battle that has to be fought. I'm not arguing that. I'm not arguing that at all. We're going to fight that battle. And there'll be people up here talking next year about the evils of TPM and some legal battle that's going on, or something they're doing with some vendor. Fantastic, more power to them. You know what? There's a lot of good. Remember that graph I showed? Two lines, same slope, not making any progress. Well, let's focus on that part, and set aside the cultural bits, because we all like to moan and groan and complain about the cultural bits. That problem will work itself out. Let's talk about the technology. So, to you, the pirate: that was a very piratey response. I think if pirates stood at the bow of the ship like that, they'd probably get blown off with a cannon. Here's the inside of a TPM chip, in swank little PowerPoint rounded bits. Going around the horn, you've got I/O in the middle, NVRAM, and the platform configuration registers, the PCRs. This is basically a set of information that can uniquely identify your system. It's going to have manufacturer information, serial numbers, peripheral information. This can be used to form a hash that uniquely identifies your system in a given state. Again, good and evil: we can all think of ways this can be used for good and bad. Attestation identity key. This is basically the key that's used to identify and authenticate the chip to off-chip entities. So if the chip needs to talk to the rest of the operating system, or talk to something on the network, this is the key that gets involved. Program code: this is where it does programming stuff. That was very descriptive. RNG: it's somewhere between a really good RNG and a PRNG.
Note, it won't give you the same answer every time, thanks, /dev/random, but it's not necessarily 100% great. SHA-1 engine. Yes, we can debate how good SHA-1 is, but come on now. Key generation, because it does need to generate keys on its own for internal and external use. RSA. State management: this is, is it turned on, is it turned off, that kind of thing. And the exec engine basically executes the program code; that's your processor, kind of. So here's how you interact with it. It looks a lot like a smart card. I was told the other day that there's a reason for that: if you look at the chip model numbers from some of these vendors, they're precisely one model number off, or just a letter modifier away, from the smart card chip from the same manufacturer. So for those that do smart card work, this looks very familiar. It's command-response. You send something to it, a bunch of hex; you get something back, a bunch of hex. There's an application; it makes a library or socket call. This particular datagram, this is the startup datagram. It's actually one that was a real pain in my butt. The first field is the type of datagram. The second field is the size. The third is a specific command, in this case startup. And the fourth is a modifier: 01 is a fresh start, gets the system up and running from the get-go. Note, when a TPM chip comes online, there are two things that happen, or two things that need to happen. The first is an init, and that's a device-specific thing. That's basically what powers it on. Click, turns on the switch. The second is a startup, when you have to send it this thing. All the tools I found under Linux presume the startup command has already been given. So I'm running all these programs, and none of the programs are working. They're all cratering with errors like "chip not activated." Like, oh, this isn't good. And all kinds of crazy crap. This is about, I don't know, four days before I'm flying out here. So I'm really not happy at this point.
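Here's roughly what that startup datagram looks like if you build it by hand. The layout is the type/size/command/modifier structure just described, with constants as they appear in the TPM 1.1b/1.2 command spec: tag 0x00C1 for a no-auth request, ordinal 0x99 for Startup, and 0x0001 for a fresh (clear) start.

```python
import struct

# TPM 1.2 constants (from the TCG main specification)
TPM_TAG_RQU_COMMAND = 0x00C1  # "type of datagram": request, no authorization
TPM_ORD_Startup     = 0x0099  # the Startup ordinal
TPM_ST_CLEAR        = 0x0001  # "01 is a fresh start"

def tpm_startup_request(startup_type=TPM_ST_CLEAR):
    # tag (u16) | total size (u32) | ordinal (u32) | startup type (u16),
    # all big-endian, 12 bytes total.
    body = struct.pack(">IH", TPM_ORD_Startup, startup_type)
    size = 2 + 4 + len(body)  # tag + size field + body
    return struct.pack(">HI", TPM_TAG_RQU_COMMAND, size) + body

req = tpm_startup_request()
print(req.hex())  # 00c10000000c000000990001
```

Twelve bytes of hex in, a short tag/size/return-code response back out; that's the whole command-response model in miniature.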
And then I discover: oh, I haven't turned it on yet. So I do the startup, and everything starts working magically. So I'll do a demo here in a minute of how that doesn't work. High-level breakdown of TPM commands: administration stuff, opt-in, ownership, key management, crypto. You can all read. Probably what's most interesting here is the crypto stuff. Bind. I'm being a little imprecise here. When you bind, you're encrypting a chunk of information, okay? What you can do is use either exportable keys or non-exportable keys. Exportable keys: if you bind something with one, you're just encrypting it with a key that you could save off to software, so it's not really that secure. If you encrypt something with a non-exportable key, well, now you've bound it specifically to that TPM chip. But God bless you if that TPM chip goes away, because cracking RSA is still kind of a hard problem, and you may not be able to do it today. You can sign things, no shock. Seal: this is where you're binding, but you're also hashing in the PCR data. So you have the data that uniquely identifies your computer, you use a specific key, you squish it all together, and now you've got a sealed thing, and then you can also sign that if you want to. So this is where you can get really interesting things, where you bind specific pieces of information actually to the chip that runs on your device. Upgrades, session management, et cetera, et cetera. So all Intel-based Macs have these TPMs. Or at least, so I surmised when I started doing this. One night I actually flipped out, because I hadn't yet successfully written code to access it, and I spent the entire night just proving to myself there was a TPM in there. I dug into the guts of the Advanced Configuration and Power Interface, ACPI, and dumped all the crap off the motherboard through the ACPI commands, and actually discovered it.
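The seal idea, data encrypted under a key that has the PCR state mixed into it so it only comes back on the same machine in the same state, can be sketched like this. This is a toy model with a throwaway SHA-1 keystream, nothing like the chip's actual RSA sealing or blob format; it just shows the shape of the operation.

```python
import hashlib

def _keystream(key, n):
    # Toy keystream: SHA-1 in counter mode. Illustration only.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha1(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(storage_key, pcrs, data):
    # Mix the PCR state into the encryption key, so the blob
    # only unseals when the platform is in the same state.
    pcr_digest = hashlib.sha1(b"".join(pcrs)).digest()
    ks = _keystream(storage_key + pcr_digest, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def unseal(storage_key, pcrs, blob):
    return seal(storage_key, pcrs, blob)  # XOR is symmetric

srk = b"toy-storage-root-key"
good = [hashlib.sha1(b"trusted-bootloader").digest()]
evil = [hashlib.sha1(b"tampered-bootloader").digest()]
blob = seal(srk, good, b"laptop disk key")
print(unseal(srk, good, blob))                          # b'laptop disk key'
print(unseal(srk, evil, blob) == b"laptop disk key")    # False: wrong PCR state
```

That second print is the whole point of sealing: boot a tampered bootloader, the PCRs change, and the disk key simply doesn't come back.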
It was there, I went and had a few beers, I was very excited. Never fear, we've got some code to examine it, because there's no interface from Apple, okay? I'm not saying this to ding Apple, there just isn't any. If you've ever tried to do Bluetooth coding on a Mac, up until 10.4 was released, if you wanted to go scan for a device, it wasn't like it was on Linux with BlueZ, or like using somebody else's stack on Windows. You called a wizard, okay? You said, I want to find a Bluetooth device, I'll call the Bluetooth device discovery wizard. Away it went, the user had a little thing pop up, and then you got the MAC address and all the information about it. When you're trying to write actual, real Bluetooth software, that may not be the most robust way of doing things, but it worked. Wi-Fi similarly on a Mac, there wasn't really a good interface to that, so if you wanted to do war driving, somebody went and reverse engineered the header files so you could actually compile code that ran against the Apple 802.11 libraries. They're getting better, but Apple's not really in the market to make some of this stuff extensible; that's more Microsoft's end of the world. Anyway, they didn't give any interface for this, so we ended up having to use Linux to make it happen. Hopefully we'll have some Apple code running at some point. Not sure if someone was giggling at me or... Anyway, Ubuntu, how many Ubuntu users? Wow, how many FreeBSD users? If you're near the Ubuntu users, punch them. FreeBSD, just if you haven't heard, is God's operating system. It's the greatest thing ever. I felt really dirty when I used Ubuntu, like I had to go talk to my FreeBSD install disk: I'm sorry, it was only a one-night thing. Then I went back and recompiled the kernel and didn't tell him about it. So, there are two ways to access a TPM on a Mac. You can use custom applications and access it through LibTPM.
It's a legacy thing written by IBM, and it just talks to the TPM chip like any other device. Then there's this open source package called TrouSerS. It has a daemon called tcsd that basically grabs the device by the face and doesn't let anything else talk to it, and if you want to talk to the TPM, you have to do it through that. That's Guido, okay? So tcsd is Guido, and then you do socket calls across to tcsd to access the TPM. In particular, the TPM is the Infineon IFX0101, which is version 1.1b. Here's a quick demo, or what I hope to be a quick demo. It may actually be a complete goat rope. So, here's the top of my scroll buffer. Is that big enough for folks to see? Is contrast all right? It's lousy and, well, it's a terminal. It's Courier, come on now. This TPM demo, the green things up here, are the default utilities that come with LibTPM. TPM demo, in theory, goes through and does all kinds of cool stuff to the chip to show you that it's working. This is what you get when you don't start the chip up. It just says error code 38, which, if you flip through all the documentation and the header files, you eventually realize means it thinks, oh, it's not turned on. There actually was no utility to turn it on, so I had to use my immense C coding fu to write a five-line program called tpm_startup. Trust me, if you've ever seen me code, that was actually an adventure, okay? I was debugging something for the hacker arcade at ShmooCon, and Provere, one of the guys in Kenshoto who help run the capture the flag, was debugging it for me, and he's like, dude, you've declared a function inside a conditional. The function doesn't exist because your conditional didn't match, and so the rest of your code doesn't run. I'm like, that's not normal? So, me trying to make this thing and then get the makefiles all set up, Makefile.in and .am, my God. It was like, you know, a peyote trip, woo, a lot of fun.
So anyway, we do a tpm_startup and, you know, you get this incredibly verbose message that says it's been successfully started, and now we'll tempt fate and see if it was true. Oh, now a bunch of crap scrolls by. Demos are great when all they show is hex, right? So a few, some applause. This is actually where Caesar's Challenge is gonna be, if anyone wants to take that. Let's see, scrolling up to the top. All we're doing is basically getting the capabilities, information, you know, what can this chip do? It's a lot of hex, I'm not gonna dive through it. There are a few interesting bits. One, we're getting PCR registers here: PCR05, PCR06, PCR07. These are different registers that in theory could contain specific information, or hashed information about the configuration of my machine. Serial numbers, manufacturers, that kind of thing. What's interesting is that by default there's actually nothing in them. We scroll down and at some point they start turning into Fs: we get to the higher-level PCR registers and there's nothing there. Now we get all the way down to the bottom and there are two big blobs of data. These are the public keys that uniquely identify this TPM, okay? These are kind of secret, kind of not public keys. They're not something you should really let off your box, but if you're really crazy you can take a picture of this and then try to own the TPM on my machine. More power to you. Again, did anyone manage to Google and find Maxis' patches? See, that's just not cool. I was hoping somebody would have, because largely I haven't found them. I'm very curious too. Anyway, these are the keys that uniquely identify my system, and they're what gets exported. So if I want to interact with it, if I want to sign things, put new keys on there, export keys, this is the key that I have to use to do that. I can reset the whole system and generate a new key, but that's going to screw up previous transactions that I already had.
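Two of the operations behind that demo, extending a PCR register and sealing data to PCR state, can be modeled in a few lines. This is a toy sketch under loud assumptions: a real TPM does the encryption with RSA keys held in hardware, while a SHA-1 keystream stands in here purely to show what state each operation depends on. The PCR extend formula itself (new value = SHA-1 of old value concatenated with the measurement) is the standard one.

```python
import hashlib

def pcr_extend(pcr, measurement):
    # PCRs are never written directly, only extended:
    # new value = SHA-1(old value || measurement)
    return hashlib.sha1(pcr + measurement).digest()

def _keystream(key, n):
    # Toy stand-in for the TPM's RSA encryption; not real crypto.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha1(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key, pcrs, data):
    # Seal: squish the PCR state in with the key, so unsealing only
    # works on a machine whose PCRs are in the same state.
    composite = hashlib.sha1(b"".join(pcrs)).digest()
    return bytes(a ^ b for a, b in
                 zip(data, _keystream(key + composite, len(data))))

unseal = seal  # the XOR keystream is symmetric, so the toy unseal is the same op

# A fresh PCR is all zeros, just like the empty registers in the demo.
pcr = pcr_extend(b"\x00" * 20, hashlib.sha1(b"bootloader").digest())
blob = seal(b"tpm-key", [pcr], b"secret")
assert unseal(b"tpm-key", [pcr], blob) == b"secret"
assert unseal(b"tpm-key", [b"\x00" * 20], blob) != b"secret"  # PCRs moved: no dice
```

The last line is the whole point of sealing: change what was measured into the PCRs and the sealed blob stops decrypting.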
So anyway, that's the demo, all right? There were a few things that were interesting about getting this whole thing working, and we actually have code now that'll run and do this automatically. First of all, this thing doesn't use a BIOS. I think a lot of you probably already know that. It uses EFI. EFI is a pretty neat little boot-loading thingamabob, but it's a real pain in the butt to build live CDs and make things work on it. They made this thing called Boot Camp that emulates a BIOS. I didn't want to actually touch my hard drive; I wanted to boot something off an external USB disk. So after some wrestling, I actually managed to take the Ubuntu live CD that booted, put it onto a USB disk, upgrade it, get a new kernel on it, make all the TPM stuff work, get the configuration right, and then I tried to burn a live CD. I made about 40 coasters and decided at that point I was done, and all I did was upload a gigantic tarball onto the website. So there's like a 680 meg tarball which is basically a file system. It's not more glamorous than that. You download it, you drop it on a USB disk, and then you plug it in and you can boot Ubuntu and play with the TPM on your Mac. And actually it's kind of nice just to have an external Mac disk, or an external, you know, whosy-whatsit Linux disk, to be able to boot on your PowerBook. So I saved you all the hassle and figured out how to do that. You can just download it and do it. Hopefully I'll make a live CD once my brain recovers. Trusted Network Connect. So network access control, network access protection, these are neat little buzzwords these days. You know, we're trying to protect access to our enterprise networks more and more, rather than solving the entire, you know, MLS, signed-object, trusted-boot problem all in one shot. Something that TCG is doing is this thing called Trusted Network Connect. The idea here is that it's a formal specification to control how systems connect to the network.
For all intents and purposes, it's 802.1X back-ended into a TPM chip, okay? It's doing authentication of the device with a device-specific key. There's a key, it comes out of the TPM, it goes through the authentication process, and now the device, not the user, the device, is authenticated on the network. You can do all the typical quarantine and patching and everything that you wanna do under the guise of TNC. Juniper has a solution that's TNC compliant. Cisco has a competing solution that is not TNC compliant; they've got their own thing. Microsoft has their own way of doing it. No one's bought into the church of TNC really heavily yet, but there are options, and you can actually go and deploy them if you desire. Here's a really bad diagram I lifted from a website that just has arrows and boxes that describe TNC. Anyway, phenomenally useful slide. Other capabilities. So now we've talked about the low-level bits, like, hey, you can encrypt data, you can sign data. That's awesome, we could do that anyway. What else can you do? Well, higher-level issues: data-at-rest security, right? You wanna secure a hard drive or something like that. Vista has the ability to use the TPM for key storage in a secure container. Oh my God, I didn't copy edit this at all. This is a thing called BitLocker. This is kind of a funny one. How many people actually use encrypted drives and encrypted volumes for what they do? That's not bad. In the general population, the number of hands that go up is far fewer. And it strikes me that at an enterprise level, it's a real pain in the butt to do that. And after this VA laptop thing got stolen, first of all, every laptop on every desk in every corporation is a liability to that corporation. If it goes out the door, you're gonna notify somebody of some identity theft threat. It's a fact of life, you know? There's just stuff on everyone's hard drives that shouldn't leave the organization.
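The TNC-style device authentication described above boils down to a challenge-response. Here is a toy version under stated assumptions: on real hardware the key would live inside the TPM and never leave it, and the response would be an RSA signature; HMAC-SHA1 stands in for that here, and every name in this sketch is made up for illustration.

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(20)  # stand-in for the TPM-resident device key

def device_respond(nonce):
    # The device proves possession of the key without revealing it.
    return hmac.new(DEVICE_KEY, nonce, hashlib.sha1).digest()

def network_verify(nonce, response):
    # The network side checks the response before letting the device on
    # (or shunting it off to quarantine for patching, TNC-style).
    expected = hmac.new(DEVICE_KEY, nonce, hashlib.sha1).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)  # fresh challenge each time, so replays don't work
assert network_verify(nonce, device_respond(nonce))
```

The point of grounding this in the TPM is that it authenticates the device itself, not the user sitting at it, which is exactly the distinction the spec draws.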
And I don't know how CIOs sleep at night knowing that they've got people completely untrained in InfoSec and OpSec running around with these little laptops, like doot, doot, doot, doot, you know? Just, oh, I left it in my car and the car door was open, now it's gone. Oops, 40 million social security numbers go away. And you read these websites, you know, reporters write about this, oh, you want a comment? And it's like, oh, this all could have been solved with drive encryption. All right, give me a break. It's not that easy. It's not like you snap your fingers and you have enterprise drive encryption for 10,000 users. Maybe a few people do, but it's certainly not the norm. There aren't a lot of tools out there to do that. BitLocker, technology-wise, is really cool, okay? But there's no handle on how do you operate it, how do you maintain this in the enterprise. Microsoft's trying to get that together. But no one's really deployed this yet. We don't know what the ramifications are of having globally encrypted disks with hardware keys, where you have to export and track the key material and all this crap. Nobody knows how that's gonna work. So, BitLocker, exit stage left. Crypto API: no confusion about whether an algorithm is implemented correctly anymore, because it's all in the hardware, and assuming you trust the hardware, then everything's okay. Note, there's an OpenSSL library available on the TrouSerS website that's been compiled against the TPM, so you can basically back OpenSSL with the TPM for crypto capabilities and key storage. Kinda cool. Remote attestation. This is where you can tell a remote device, hey, I'm okay, or hey, I'm not okay. Trusted boot. This is really the kick in the pants, right? I got a TPM, I got a secure boot loader: trusted, provably secure boot code. Now I can load a signed kernel.
Signed does not mean secure, but it means it's at least the kernel I intended to load, right? And then I can load signed drivers, signed applications, and at least I know what I meant to do happened, okay? It wasn't a rootkit, it wasn't something else. Everything that I intended to have happen happened. Now, it still could be chock full of holes and the worst-smelling crap in the world, but at least I know I had a good start at it. This is a really neat capability. We can't really do this easily today. I mean, you can try, but Microsoft and Linux and these other vendors are all working toward a solution that will allow us to do this. Hey, can you imagine being able to boot your box and know definitively, at least right now, that there are no rootkits running, without having any other security software running? No antivirus, no antispyware, nothing. It's kind of neat. It's a neat little future that we can think about. I like to call it fairy world. Types of attestation. I only got five minutes, so I gotta drop the hammer here. Attestation by the TPM: this basically says, I'm a TPM and I'm here. Nothing more complicated. Attestation to the platform: this basically says, I'm here and I'm okay, you can trust me to tell you things. Attestation of the platform: this is where you ask whether the entire operating system is screwed up, okay? So if you've gone through number one and number two, you can at least trust the answer to, is the whole operating system okay? It'll say no. God, no. You're owned. You got that Sony rootkit, man. Rocks. Oh yeah, I got a woo for the Sony rootkit up front. That side. You work for Sony, sir? Oh no. He's got the RSA barcode shirt on. That's pretty hot. Talk to me later. And authentication of the platform: this is device authentication, you know. It's not rocket science. I mean, I hope I bored you in the last 20 minutes, because this is not rocket science, right? Nor is it evil. But that's pretty cool.
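The trusted-boot and attestation story above can be sketched as a measurement chain: each stage hashes the next before handing off, and attestation of the platform amounts to comparing the accumulated value against a known-good one. The stage names and blobs here are made up for illustration; only the extend-style accumulation mirrors what the hardware does.

```python
import hashlib

def measure_chain(stages):
    """Extend a 20-byte register with each stage's hash, in boot order."""
    pcr = b"\x00" * 20
    for blob in stages:
        pcr = hashlib.sha1(pcr + hashlib.sha1(blob).digest()).digest()
    return pcr

good_boot = [b"firmware v1", b"signed kernel", b"signed drivers"]
expected = measure_chain(good_boot)

# A rootkit that swaps the kernel changes the final measurement, so
# attestation of the platform comes back: no, God, no, you're owned.
bad_boot = [b"firmware v1", b"rootkit kernel", b"signed drivers"]
assert measure_chain(bad_boot) != expected
```

Signed still does not mean secure, as the talk says: the chain only proves you booted what you intended to boot, not that what you booted is free of holes.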
Like, this is really neat stuff to be able to do, and to say, hey, this is actually grounded in this little hardware thing that's really hard to subvert, okay? We don't have to layer a bunch of crap on top of it and then hope for the best. We can actually have a pretty reasonable proof that this thing is doing what it's supposed to do. Again, we're not solving the assurance problem. We're solving the integrity problem here. I haven't said that yet, but I said it like you were all internalizing that whole statement. So first, the bad. Opportunities abound for loss of control of content, or of your computer, yeah, okay, we get it. Failed hardware is an issue. Users aren't used to thinking about key management, right? Like, how many people back up their SSH keys? How many people wish they had backed up their SSH keys? There you go. And we're all smart computer security folks. When my mom calls me up and says, hey, I have to back up my TPM keys and I'm not sure how to do it, it's a bad day. It's gonna be a really bad day. Operating system vendors may get territorial. I mean, honest to God, like it was pointed out by the pirate, Windows Genuine Advantage could decide, hey, you've got Mozilla, I don't like Mozilla, I'm not gonna give you a security update. I'm not saying Microsoft's gonna do that, but the potential exists. This is where technology ends and legal and market pressures begin, okay? Let's continue to make that distinction. I'm talking about the technology and the good that can come of it. Clearly, bad things can come of it too. That will be solved differently. If you try to make technology solve the problem of not having bad things happen, you've lost, okay? You can't make privacy happen just because you made really cool stuff and you built the uber-secure multi-node network that people can't trace crap through. People are just gonna come steal the computer, okay? You can't solve all these problems with technology.
You can't solve privacy all the way with technology. There are market and legal issues that are gonna step in. So you just kind of need to take the red pill and accept it. The good: trusted boot can make a big dent in controlling malicious code in the enterprise. Integrity of your boot process is a really cool thing. Trusted network access, well, the train's already left the station. We're doing 1X, we're doing NAC, NAP, all that kind of stuff. Now we can tie in the hardware and get even more assurance about the device that's connected to the network. We have the ability to protect mobile media and other data-at-rest situations. The ugly: we're all distrustful, angry, drunken people, and this represents a big shift for our world. It really does. It changes everything we've known about InfoSec for the last 40 years. This defense-in-depth idea changes, because now we start talking about kind of a bottom-up, provable security, at least at a lower level. And when you think about MLS and you look at all these signed objects, this is not how we think about systems. This is not how we build systems today, and it really is gonna cause us to have to change how we look at this stuff. Vista has TPM support, but you have to have a version 1.2 TPM, and there aren't a lot of them out there. This one's 1.1, and most of the ones in the ThinkPads out there are 1.1 as well. The Vista TPM developer support largely consists of documents that have function calls, with the documentation to be written later underneath each function call. So there's really not a lot there. Where trusted computing is going: many systems have TPMs, not a shock, huge capability, and rants. Couple other points I wanna make before I end this thing. One, ShmooCon 3. How many people went to ShmooCon? Woo! All right, well, I hope that got their attention in the other room. So someone actually said, I've been to all three ShmooCons, they were all great. There have actually only been two.
That guy had a really good time at one of them, apparently. ShmooCon 3 will be March 23rd to 25th, 2007. People may ask why so late, since it was in January last year. It's because I didn't get to go see my folks over Christmas time, because me and my wife and everybody else were getting ready for ShmooCon. So I'm being selfish with my family, and we're gonna go see our folks, and then we're gonna have ShmooCon in March. Same place, the Wardman Park Marriott, very nice venue, with a slight change in structure. The highlights: we're gonna do 20-minute sessions in the afternoon on Friday, which should be kinda cool. Kinda turbo-talky, but it's just gonna be one big track and everybody's gonna be in it. And then we're gonna do, how many people remember the network they used to build at Interop? The InteropNet, yeah, the big Interop network, where they got all these hot shots and vendors together and they plugged a bunch of stuff in, they made the network, and they ran it, and then it became more commercial, and then it got relegated to the labs, and then it just kinda went away. What we're gonna try to do is the same type of thing for ShmooCon. We're gonna get in vendors, we're gonna get in smart people, we're gonna get in people that wanna learn, and we're gonna build an enterprise-class security network to run the con. Or else we're just gonna use twisted pair and lay it on the floor, if it doesn't all work out. So one guy will be tagged with bringing like 4,000 feet of cable and crimpers, and the rest of us will go build a secure network. So anyway, we hope to have some fun with that. That's kind of the few days beforehand. We're still gonna have contests, the hacker arcade, all that kind of stuff. CFP will be out next week. Tickets on sale next month. One last item. I know I'm gonna get bum-rushed off the stage here. We were given the opportunity to mentor four projects under the Google Summer of Code. I wanted to mention them really quick.
If you go to code.google.com/soc, you'll learn more about Summer of Code. I don't have time to explain it. Four projects. First one's Firekeeper. This is browser-level intrusion detection, looking for cross-site scripting, phishing sites, all that kind of stuff from the browser side, and blocking them from doing anything bad to you. It's pretty neat. Firekeeper.mozdev.org for more information on that. GPG Greasemonkey. Client-side mail encryption, now via a Firefox extension. We were originally trying to do it via Greasemonkey, which is basically JavaScript rewriting pages on the fly to do neat stuff. The problem with JavaScript is there's this little security thing that doesn't allow you to access local binaries on the machine, like, say, GPG. So the student came to me one day and said, I think I'm gonna have to write GPG in JavaScript. Is that okay? Like, no, no. I don't think that's a good idea. I don't wanna see you on the news at the top of a bell tower. So rather than do that, we went to a Firefox extension, which doesn't have the security concerns. We currently have an implementation for Gmail and Yahoo. There's the SVN information if you wanna go get the version control stuff. This will all be on my website later. This is a screenshot that's so tiny that you have no choice but to believe that it works. There are little GPG encrypt and sign things there. It's kind of funny. Johnny Long was giving his hacking Hollywood talk, where he had all these Gmail screens with all this extra functionality, and we thought he did it all with Greasemonkey. So when we started this project, the student and I asked him, Johnny, hey, how'd you do that? And he says, oh, I just Photoshopped that. Oh, dope. So, yeah, we've now fixed that problem. Two other things. Online rainbow table lookup. Focused largely on increasing the speed of how fast you can look up hashes. There's a DNS interface.
So if you're actually at a site where you can send out hashes in the clear and get passwords back in the clear on port 53, you probably have no security policy problems whatsoever. Anyway, we've got basic search capabilities completed, et cetera. Open Security Framework. Pub's running this one. It's basically a framework for distributed network security analysis. When I asked him for a synopsis, he said, well, you know, it's got a master-client-slave architecture. That was all he had. Like, what does it do? He said, oh, the master can tell the clients to have the clients spawn slaves, and the slaves do whatever you want. So there you go. Anyway, I'm out of time. Go to ShmooCon. If you have questions, come track me down outside. Thanks, all.