I'm going to have to hustle through my slides, so you'll just have to bear with me. The presentation is intended to be reference material, so I was going to go through it pretty fast anyway. Google is your friend: if there's something on there you don't understand, I'm going to try to explain it at a very 10,000-foot view, but it should be fairly clear if you're actually interested in the topic and want to dig in. You can find all this material, and explanations of what these things are, on the web.

I'm John Mahaffey. I work for Mentor Graphics. I know the printed materials say I'm from MontaVista, but you've probably heard that Mentor has acquired the MontaVista automotive business unit. In 2005, I was the mobile architect at MontaVista, doing a lot of mobile, power-management, fast-boot, and security sorts of things for mobile phones. After that, I was the automotive architect at MontaVista. About a year ago, I became the security lead for the GENIVI System Infrastructure Expert Group, so I'm working with the security team inside GENIVI to help define automotive security. This talk is not about automotive security per se; it's more for general embedded use. As of about a month ago, I became the senior system architect for Mentor Embedded.

So this is about embedded systems; it's not about IT security. That's a very different beast. A key premise of IT security is that you have physical control of the machine. With an embedded device, you don't own the device once it ships, so physical security is not possible. On the other hand, in general you're not protecting Fort Knox — you don't have billions of dollars worth of data riding on it — so it's not as big a deal. If you are protecting Fort Knox, there's a lot more you have to go through; look up Common Criteria if you want to see what kind of pain you're in for.
In general, hackers are a lazy breed: they go for the low-hanging fruit. So this talk is intended to help you not be low-hanging fruit. There's a little joke about a couple of campers confronted by an angry bear. One camper sits down and starts tying his shoes. The other goes, "Why are you tying your shoes? You can't outrun that bear." He says, "I don't have to outrun the bear. I only have to outrun you." Make your system hard to hack, and they'll go after easier targets.

A quick taxonomy of black hats. Generally, the security community doesn't call them hackers; there are good hackers and bad hackers, and the bad hackers are called black hats. At the lowest level are the script kiddies. These are people who really don't know what they're doing. They found something interesting — a friend told them about Metasploit or something — and they tried it out, and wow, they got into somebody's system, and isn't this cool. They're easily defended against, and easily caught. The next level are what we call opportunists. These are people, maybe insiders, who have inside information, and an opportunity presents itself for them to use that information, possibly to sell it. There are also bots scouring the net at all times, looking for vulnerabilities and open ports — that's another type of opportunist. Then there are the grudge hackers. These are people probably in the same category as script kiddies: they don't really know what they're doing, but they're motivated. They want to punish the company for laying them off, or whatever their grudge is. In general, a denial-of-service attack is fine for them; there's no profit motive. And then we have the organized black hats: the hacktivist, somebody who's got an axe to grind — possibly denial of service is what they're after; they don't like Microsoft, or they don't like Red Hat, or whatever it is — and organized crime.
There are well-funded organizations that create botnets. They actually spend money: if you write a virus and can build a botnet for them, they will pay you for your botnet, and then they can sell it to other organized criminals. Then we have the nation states. We've all heard about the Chinese government hacking into the United States; well, there's also the United States and Israel hacking into Iran, and there's the Russian government. They're called nation states — "government" isn't a broad enough term, apparently.

So now we'll get into the best-practices part of the talk. I'm going to go over some design best practices, and it starts with the root of trust. If you can't trust your bootloader, then you can't trust it to bring in the right operating system. If you can't trust your operating system, then your applications can't trust that they're going to do what you designed them to do. So it all starts with secure boot. There are many commercial vendors of it — ARM has TrustZone, and Freescale and TI and all the semiconductor vendors have their own versions of secure boot — but it starts with a root of trust.

If you're connecting to the Internet in any significant way and communicating things of value, you should have a hardware crypto module. Typically it would have a cryptographic engine for encrypting and decrypting, and some one-time-programmable cryptographic keys — your secret master key. Don't use the same secret master key on every one of your devices. These keys are crackable, and I'll get into that a little later, but you don't want one cracked key to leave a whole class of devices compromised. A random number generator is a good thing to have; a lot of the communication protocols require good random numbers. Trusted boot we just talked about.
One-time-programmable data: things like serial numbers, flags that say what state the machine is in, those sorts of things. A monotonic counter, so that you can tell if there's a man in the middle trying to spoof your communication. Secure RAM, for holding decrypted information that you want to keep secure. Tamper detection, to see if the device has been opened, or to catch power glitches or over-temperature conditions used to attempt fault attacks. Other things are possible. You've got to ask yourself how much you're willing to pay and how valuable your data is; there's always a compromise.

You're going to want to harden your hardware. A lot of you are Linux hackers and believe firmly in the GPLv3, but often that's at odds with a secure device that needs to communicate with a secure infrastructure and is doing monetary kinds of transactions. Limiting access to the hardware — for instance by potting, or using glue instead of screws, those sorts of things — makes it harder for people to hack your device. Now, I know I love to hack my own devices, but this isn't about that. You want to reduce electromagnetic emissions: wrap it in foil, and actually take it out on the radio range and see what kind of emissions you've got, because the bad guys have radios, and they can tell what your machine is doing by what it's putting out. Be wary of GPLv3. You don't absolutely have to avoid it, but if you do have GPLv3 code on your machine, you are required to let anybody who wants to change your device install modified, unverified code on it. If you do accept unverified code, you might want to have a one-time-programmable "modified" flag inside your crypto module, so that when a packet comes in to your services infrastructure — even with the right signing key and everything on it — this flag would be attached, and the infrastructure could say, "I'm sorry, I can't trust that this packet is actually what it says it is." Use Linux security modules.
There are a number of them you can use. Probably the best known is SELinux. SELinux is very good: it's vetted by the NSA, the National Security Agency, for use in secure machines, and it's well supported in most of the Linux infrastructure. It's also the hardest to configure and get right — and getting it right is very important to security. The next level down would be AppArmor, which is path-based, whereas SELinux is inode-based. One of the problems with embedded systems is that a lot of embedded file systems don't have the extended attributes that SELinux requires, whereas a path-based system such as AppArmor just uses the path to the file, rather than the actual inode, for file-access verification. Smack is the Simplified Mandatory Access Control Kernel. It's probably good enough for most applications. It's kind of like flossing your teeth: it doesn't really matter whether you use waxed floss or unwaxed floss — what matters is that you floss. So put in a mandatory access control system. Unfortunately, the three main ones I've shown here are not compatible with each other. They use the same concepts and the same LSM hooks, but their configuration files are not compatible at all, so if you choose one and then have to switch to another, it's like starting over from a configuration standpoint. There are others, too — TOMOYO Linux, for instance — there's probably a dozen of them. Linux containers: there was a talk yesterday about namespaces and Linux containers, so if you want to look at Jake Edge's talk, it was pretty good material.

You should plan in the design phase for field updates. If you're connecting to the Internet, you're probably going to have issues at some point with something you didn't get quite right, and if you plan for field updating up front, it's not too painful. There's the question of push versus pull, but that's a policy decision.
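To make the mandatory-access-control idea above concrete, here is roughly what a Smack policy fragment looks like — a minimal sketch, with label names invented for illustration, loaded through the kernel's smackfs interface on systems configured for Smack:

```
# Hypothetical Smack rules; the labels are made up for this example.
# Format: <subject-label> <object-label> <access: r w x a t, or - for none>
MediaPlayer  MediaFiles   r
MediaPlayer  AddressBook  -
Updater      SystemImage  rwxa
```

Anything not explicitly granted is denied, which is the essence of mandatory access control: the policy, not the application, decides who can touch what.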
Do you push stuff out to the device, or do you require users to say, yes, go ahead and give me the latest security fix?

Protect your data. In security circles there are two forms of data. There's data at rest, which is the data actually on your device — how it got there is not relevant. For sensitive data, such as address books or music or video content, you can use an encrypted file system such as eCryptfs; you don't want your device to fall into the wrong hands and give up easy access to that sort of sensitive data. And then there's data in motion: things that come into your device, either from a USB stick or over the network, or however you access your infrastructure providers. Use SSL — Secure Sockets Layer — or rather TLS, Transport Layer Security, its successor, or IPsec. These are all consumers of random numbers, so you've got to keep your entropy up to use them. If you don't have enough good entropy in your system, these communication mechanisms will slow way down; that's something to be aware of.

Development best practices. This is kind of like Programming 101, but it works. Peer reviews: do them. People are proud of their work, and they're not going to put something out there for other people to look at and criticize if it's not good, so this forces everyone to do their best job on the code. Put comments in your code — I know it's hard for people to do that. The code itself is the ultimate authority on what the program does, but if the comments aren't there, it's very hard to review and very hard to modify, and that becomes a source of vulnerabilities. Use coding standards. There's the Linux kernel coding style; every large corporation has its own coding standard. It doesn't really matter what the standard is, but if everybody uses it, it's a lot easier to follow along as you go through unfamiliar code. Basic stuff. Simplicity: there's power in simplicity.
You want to keep your routines small, simple, easy to understand, and easy to use. There's a quote in Wikipedia's vulnerability article to the effect that a large source of vulnerabilities is hard-to-use constructs in programming languages — whatever your language is, not necessarily just C. If constructs are hard to use, people will use them incorrectly, and that creates holes for hackers to exploit. Randomize. The PaX project has had a number of things accepted into mainline — address space layout randomization, among others — and they've done some things with binutils to help harden programs. Go out to the PaX website and look at it. As I mentioned, keep your entropy up. Enroll IRQs that are based on random events, such as keyboard and networking events, as entropy sources; don't enroll your timer tick — it's not random. There was an interesting project called haveged that created entropy out of variations in cache behavior — how the access patterns in cache change — and it actually turned out to be a good source of entropy. And don't mess with the PaX penguin.

Test your system. It's basic stuff. You need to plan: have a QA plan, and write it down. Test your error paths. A lot of code is tested in the main path, but you end up not really covering a lot of unusual conditions. And you can bet that a hacker trying to penetrate your system is going to start typing with his left elbow and spin in circles and do random things, and if he sees something unusual, he's going to say, okay, let's do that again — if I can reproduce it, maybe there's a hole there. That's how things are found. Do coverage analysis to see how good your test coverage is; gcov is a good tool there, pretty cool. (If I'm going too fast, let me know. I'm not sure what my time left is — about 15 minutes.)

There are a lot of static analysis tools available, both free and for pay. Your primary first line of defense should be the compiler. It's basic.
There are flags you can set in the compiler — -Wformat and -Wformat-security — that will check your format strings for vulnerabilities. One of the basic vulnerabilities is passing user input as a format string: somebody can put a %s into their input, and the library will interpret it as a format directive. That's one of the ways malicious code gets injected into a system. Obviously, check your inputs and make sure they're in bounds; those sorts of things are all pretty basic. The compiler has -fstack-protector, which puts canaries in your stack to detect stack-smashing attacks and raise an error if it detects a stack overflow. Define _FORTIFY_SOURCE; that puts in checks on your buffers against buffer-overflow attacks.

Wikipedia, Google, and the network are your friends. There's a very extensive list of static code analysis tools on Wikipedia; here are just a few. There's your basic lint, and its descendants for Java, Python, and so on; checkers for shell scripts; checkers for C; and Yasca, a security-checking framework with pluggable modules. There are also quite a few for-pay static analysis tools from Black Duck, Coverity, Klocwork, Rational, and others. Depending on your financing and the level of management support you get, those are all good tools as well.

There are dynamic analysis tools, too. For memory management you've got dmalloc, Electric Fence, mtrace, mpatrol. Valgrind is a very good tool: it's an emulator that emulates your machine. Of course, as an emulator, your hardware might do things that Valgrind doesn't anticipate, so it's not guaranteed to find everything, but it's a very good way to get a lot of data about what your program is doing, and if you examine the output you might find things you didn't expect. Avalanche, Glassbox, gperftools, JRat — some of these are performance analyzers, some are for security vulnerabilities.
Again, on the commercial side, tools such as Rational Purify do dynamic analysis as well. Avalanche is an interesting one: it tries to create inputs and find error paths, and it saves off files it calls "inputs of death" — when it finds an input combination that crashes your program, it saves it for you, and it actually branches out from there and finds lots of interesting things. It's an automated testing tool.

Before you deploy, you've got to do your checklist. Close all the ports you had open — your telnet ports and SSH and your serial port — the things you really don't want people using on the deployed device. Limit access as much as you can. Remove all the debug hooks and the hacks you put in to make it easier to get into the system and test it; all of that needs to be removed before you deploy. Don't assume obscurity will protect you; security by obscurity is well proven not to work. Run the tools the hackers use: go out and run the port scans, run Metasploit or its GUI version, Armitage — these are free tools, and the hackers are using them. John the Ripper: check your passwords. Not all devices have passwords, but if yours does, these are good things to do. There are many, many more tools; any good security site will list a bunch of them.

Hire a professional. If your device is valuable and your expected revenue will cover the cost, hire somebody to do a penetration test — a "pen test," as they call them. The tester can either go in cold, which costs you more money, or you can tell him, this is what my system does and this is how it does it — can you get in? They will often find a way, and that gives you more information on what to close. A few weeks ago, security researchers found a hack into an industrial control framework used by the military, hospitals, and factories — it does surveillance and alarms and door locks. What a great thing for a spy:
all I have to do is this one hack, and I can get into your building and you don't even know it. The interesting thing was, late last year the company said it believed attacks on the system were unlikely, because the system was obscure and hackers don't traditionally target it. That was probably like waving a red flag in front of a bull. The lesson is clear, I think: do not rely on obscurity. Hackers will find their way in, and you won't know about it until something bad happens because of it. That's what's called the window of vulnerability: from the time they find the hole until the time you find out about it, the exploit is what's called a zero-day. It's a very difficult problem.

Every system has faults; perfection is not possible. Don't count on your system being perfect when you ship it. You need to follow the lists, and there are quite a few: the National Vulnerability Database that NIST puts out, exploit-db.com, securitytracker.com. Those are all really good resources for finding problems. You need to analyze them, of course, and ask, does this issue apply to my system? But if it does, you need to deploy your fixes proactively. You, or the distro you're based on, should be following CVEs and providing fixes. The security community reports vulnerabilities, when they're found, to cve.mitre.org. Generally, MITRE will embargo the actual details for a period of time — maybe a week, or a little more, depending — to give the distributions time to come up with a fix, so the distros have access to that information early. I guess if you work for a big enough company, you probably have access to it as well. Another good source of information: the oss-security community at openwall.org has a security wiki for Linux with a lot of really good information on it.

So, a little bit about attacks. There are timing attacks: a cryptographic algorithm takes a varying amount of time depending on, for instance, the number of one bits in the key.
So by checking how long it takes to encrypt a certain cleartext, an attacker can leak information about your keys — and the more information a hacker gets about your key, the closer he is to cracking it. There are fault attacks. It's well known that you can induce faults in computers: alpha particles; eddy currents induced with electromagnets; heating the part up — semiconductors don't work well at high temperatures. With the RSA algorithm, if you have a certain cleartext with a known-good ciphertext output, and you can induce a fault that produces a different ciphertext output, then with just a few of those faulty outputs you can actually crack the key. Just a few — it doesn't take many. Similar to timing attacks, there's power analysis: the algorithms use different amounts of power based on the key and the cleartext. Port scans: there are lots of ports, and there are bots out there scanning them all the time. There are denial-of-service attacks, where the attacker doesn't care about compromising your system, just about keeping it from doing any useful work. And then there are escalation-of-privilege attacks.

Now the defenses. For timing attacks, you need to either randomize — put some random variation into your encryption algorithm — or standardize your timing, so that everything takes about the same amount of time regardless of the input. For fault attacks, a tamper-detection module is a good thing. It can detect physical intrusion: did somebody dissolve the cap off the SoC? Is somebody glitching your power or your clock? Radiation — ions, alpha particles, even photons — can be caught if you put detectors in. Temperature and over-temperature conditions, eddy currents: these are all possible to detect. If you've got enough money, you can put lots of things in there. And with enough money, redundancy is a good check against a fault attack.
If you have two parallel units doing the same computation and they come up with the same result, there's a good chance it hasn't been tampered with. Power analysis attacks: again, randomization inside your crypto module. Port scans: just close your ports — that's got to be on your checklist before you go out the door. Denial-of-service attacks: use things like cgroups and quotas to rein in runaway programs. It doesn't even have to be malicious; programs sometimes run away just because they have a bug. By limiting the amount of CPU or disk space or whatever resources they use, you limit the denial of service. And to combat escalation-of-privilege attacks: mandatory access control systems, and containers — Linux containers, in other words namespaces.

How am I doing on time? Just about out. Give me another five minutes and I'll go through this real quick. This part is more in line with information-security practice, for high-value systems such as automobiles and EV systems, things like that. You have the CIA triad — I just thought it was a cool graphic — but confidentiality, integrity, and availability are the three cornerstones of information security. We use a security methodology. You start out by defining your assets: what are you protecting? What on your system is key to protect? Then you define the threats to those assets, and that helps define your security strategy. Then you take those threats and categorize them. There's a Microsoft methodology called STRIDE and DREAD, and GENIVI is using the TVRA method — Threat, Vulnerability, and Risk Analysis — from the European Telecommunications Standards Institute. STRIDE stands for the threat categories: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. DREAD is what you use to rate the effect of a vulnerability.
What is the potential for Damage? How Reproducible is it? Given that you can reproduce it, how Exploitable is it? How many users are Affected — one, versus everybody? And how Discoverable is it — how obscure is it? The typical method is to assign a value between 0 and 10 for each of these, add them up, and divide by five. The GENIVI automotive consortium uses the ETSI TVRA method, which is similar to STRIDE and DREAD with a few differences, so I won't go over it here. I was going to show an example of some of the categorization we did in GENIVI, but it's on my other laptop.

So — questions. Note that I uploaded a prior version of this talk for the website, so this exact deck is not there. I'm assuming they'll let me upload this one after the conference, but if anybody really wants a copy, just drop me a business card and I'll send you one. I'm also going to upload it to the GENIVI site, if you have access to that. Sorry it's been kind of like drinking from a fire hose; as I said, I intended this as reference material for people who want to dig deeper. Certainly, if you have questions, feel free to contact me. And if there are no questions from the broader group, then thank you very much for your attention.