We want to introduce Peter Gutmann. So, Snowden revealed a whole bunch of programmes, but one of the most important ones is the one that's related to crypto, something called BULLRUN. And it's funded to the tune of about $250 million to $300 million a year by the US government. The thing about this funding is that this is an incredibly effective programme. I don't necessarily agree with what they do, but if you look at what they're spending on this and some of the stuff I'll be talking about, and you compare it to some multi-billion-dollar Department of Defense boondoggle where they buy weapon systems that don't work, the government is actually getting really good value out of the money they're paying these guys. So, what they've developed, from their own slides, is capabilities against a whole bunch of standard security protocols: TLS, SSL, SSH, VPNs, webmail, you name it, they have capabilities against them. What they talk about in some of these documents, and this is taken from a whole bunch of the documents that Snowden has released, is, quote, an aggressive effort to defeat network security and privacy. So they're working to defeat the encryption used in network communications technologies. In other words, the stuff that people here use every day to stay secure against interception on the internet. This is something I found in one of these documents, taken straight from one of the NSA documents: the first rule of BULLRUN Club. I'll give you a second to read that. I don't think they did this deliberately, it just happened to be: do not speak about BULLRUN Club. So, okay, what's happened now? Crypto's been broken.
So, the solution to this is: we'll use better crypto. We'll use bigger keys that the NSA can't break. Or basically just do something. The thing is, as the title of the talk says, crypto will not save you. It doesn't matter what sort of crypto you use. There's a guy called Adi Shamir, who's the S in RSA. He's a really, really good cryptographer. He said something that's been paraphrased into something called Shamir's Law: crypto is bypassed, not penetrated. So, you don't attack the crypto, you just end-run around it. You attack the user, which is a really easy attack, because generally crypto involves arcane mathematical concepts and understanding all sorts of very sophisticated stuff, and the people who write the crypto are really terrible at making it usable. You take advantage of that, the fact that the user can't understand what the crypto is actually doing or how it's working, and you attack that. You attack the user interface, the application, you attack everything else. You just don't bother attacking the crypto, because there's no need to. So, let's look at an example of some of this stuff. All of the major games consoles in that list have used fairly extensive amounts of sophisticated cryptography. Some of the things they've done: signed executables, where each executable is digitally signed so it can't be modified. Encrypted storage, where anything written to storage is encrypted and only decrypted in the CPU or in memory. Full media encryption and signing, where the entire storage medium is encrypted and signed, so it can't be modified and can't be read. Encryption of data in memory. On-die key storage, where the CPU itself holds encryption keys inside the die that you can't get at, which are used to decrypt the stuff in memory. Secure coprocessors, and a whole bunch of other stuff.
Now, if you took one of these things and you sent it back in time by about 10 years, and you said, here is a black box, and it does all these things, what is this black box? Someone back then would have said it must be some highly secure, government-designed, top-secret crypto device, because nothing else would have this much security. But no, it's a games console. And yet every single one of these, no matter what was done, has been hacked, and in none of the cases was the crypto actually broken. So here are some examples. Amazon Kindle 2: they signed all their binaries with a 1024-bit key. So what the jailbreakers did is they replaced the signing key with their own key, and once the device used their key to verify the signatures, they could sign anything they wanted. HTC Thunderbolt: this was a bit more sophisticated. They had signed binaries, the kernel was signed, and the system recovery and restart code was signed. Usually, to do some sort of recovery you have to go into some debug mode or whatever, and there's not much security around that because it's a recovery mode; they'd even signed that. So what the attackers did is they took the signature-checking code and removed it, and at that point it didn't matter any more. This one's a really neat hack. There are so many of these that there isn't time to cover all of them, but this one was a really neat hack: Motorola cell phones. They put a huge amount of crypto into this. They chained the bootstrap process: they took the initial secure boot process and hashed it so you can't modify it. They used MACs, which are keyed hashes, so not just a hash that you can recalculate; you need to know the key in order to calculate and verify the hash. Digital signatures throughout, and the whole process has something called TrustZone, which is a high-security walled enclave inside the processor. And this is secure by executive fiat.
So: we will tell you this is secure because we say it is. What the attackers did is they found an exploit inside TrustZone, inside the high-security kernel, and then attacked the insecure part from inside the secure part. As far as we know, the bootloader was apparently quite good; it was the security kernel part, the TrustZone, that actually had all the security holes. Samsung Galaxy: this time, instead of using a 1024-bit key, they used a 2048-bit key, so twice the usual number of key bits. Again, this was a neat hack. The thing is, in order to load the image into memory to verify the signature on it, you have to have some metadata telling you where to load it and how large the image is. So what you do is you change the metadata so that the loader loads the image over the top of the signature-verification code. At that point the verification code is gone, and it doesn't matter whether the image is signed or not. Nikon cameras: to settle disputes over the veracity of a particular picture, Nikon cameras can sign the images they take, and the signature is encoded into the EXIF data, which is extra data that's stored with the photo, like GPS information and stuff like that. And the signing key is encoded in the camera firmware, so you dump the firmware, you've got the key, and you can then sign your own images. Canon did something even worse. They used something called HMAC, which is a keyed hash function, so the key has to be known by both the person generating the value and the person verifying it. So if I can verify a signature, or a MAC value, I can also generate a fake photo with it. And again, they encoded the key in the camera firmware, and it's shared across all cameras. So if you get one camera and dump the firmware, you can create fake images from any camera. Apple AirPort Express: you recover the key from the firmware image. ASUS Transformer: same sort of thing. I'll skip all of these.
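The Canon problem, that a symmetric MAC lets anyone who can verify also forge, can be sketched with Python's hmac module. The key and data here are made up for illustration; the point is that HMAC has no separate verification key:

```python
import hmac
import hashlib

# Hypothetical shared key, the kind that was extracted from camera firmware.
# With HMAC, the verification key IS the signing key.
firmware_key = b"shared-across-all-cameras"

def tag_image(image_bytes: bytes) -> bytes:
    # Identical call whether you are the camera or the "verifier"
    return hmac.new(firmware_key, image_bytes, hashlib.sha256).digest()

genuine = b"raw sensor data"
forged = b"doctored image data"

# A verifier re-computes the tag to check it...
assert hmac.compare_digest(tag_image(genuine), tag_image(genuine))
# ...which means the same code mints a valid tag for a fake photo:
forged_tag = tag_image(forged)
```

A digital signature, where verification uses only a public key, would not have this property; dumping the verification key from firmware would then be harmless.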
For example, Samsung digital TVs: they used a CMAC, which is another keyed hash function. You can recover the key from the firmware and then spoof the over-the-air updates. So you send in a fake update, which the TV assumes is from a trusted source because it's coming from what it thinks is Samsung.com, and at that point you're in the system. Google TV: there's a whole bunch of these things. It's basically an abstract design, and then a whole pile of manufacturers churn out these little TV dongles and boxes to implement it, and they all make their own mistakes. One very common one is debug modes that are still enabled when the thing is shipped. Path validation isn't done properly, so you can point to an external path with a binary on it, and it'll run that rather than the internal binary. But one of the neat hacks involved the NAND flash memory. The storage in these things typically isn't a solid-state drive; it doesn't look like a disk drive, it's just a block of memory. So you've got this NAND flash controller that accesses a certain location, and it uses DMA to move the data in and out. What you do is remap the addresses that the NAND flash controller accesses, and you're in the system. Or you desolder the encrypted solid-state storage, pull it out, and replace it with your own one with your own firmware. And then there are thousands and thousands of kernel bugs and other things that you can use to exploit these devices. This one is a really neat one: Android code signing. Basically, Android APK files are effectively just a modified form of zip file.
And the way you sign one of these things, because zip doesn't support signing: you've got a bunch of files inside the zip file, and you include a few extra files. There's a manifest which lists what's in there, and that's signed; you've got a certificate used to sign it; and then some other files. Basically you point to all the files that are signed, you've got hashes for them, you sign all the hashes, and then there's an overall signature over everything. So what you do to exploit this is you use a custom archive tool to create a zip file in which you've got two files with the same name. The zip format is pretty free-form, and you can have, if you want, ten files with exactly the same name in the zip file. Now, when the signatures are verified, the entries go into a hash map, so if you've got two files with the same name, one overwrites the other, and it verifies the signatures and the signatures are good. But the package is then processed using standard C code, which doesn't use a hash map, and the two files are both separately handled. One of them is unverified because of the hash map trick, and at that point you've got unverified, unsigned code running inside the system. iPhone and iOS: there's way too much stuff here. I mean, every new version that comes out gets jailbroken, but here are some highlights. I mentioned code signing. Code is executable, so you sign it and verify it. Data pages are not executable, so you don't need to verify them, because you can't execute them. Well, there's scripting and Turing-complete stuff and so on, but in theory data is not code. So what you do is you inject code into a data page; it's not verified, because it's data, not code, and then you execute it. Same thing with the debugging facilities.
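The duplicate-entry trick can be sketched with Python's zipfile module. This is a simplified model of the mismatch, not the actual Android code paths: the "verifier" here collapses entries into a dict, so the later name wins, while the "loader" walks the entries in order and takes the first one.

```python
import io
import warnings
import zipfile

buf = io.BytesIO()
with warnings.catch_warnings():
    warnings.simplefilter("ignore")           # zipfile warns about duplicate names
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("classes.dex", b"EVIL")    # attacker's unsigned payload
        z.writestr("classes.dex", b"BENIGN")  # the entry the signature covers

with zipfile.ZipFile(buf) as z:
    entries = z.infolist()
    # "Verifier": builds a name -> data map, so the later entry replaces
    # the earlier one and only b"BENIGN" gets its hash checked.
    verified = {info.filename: z.read(info) for info in entries}
    # "Loader": walks entries sequentially and uses the first match.
    loaded = next(z.read(info) for info in entries
                  if info.filename == "classes.dex")

print(verified["classes.dex"])  # what was checked
print(loaded)                   # what actually gets used
```

The two code paths disagree about which `classes.dex` is "the" file, and that disagreement is the whole vulnerability.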
There's something called return-oriented programming, which is a bit too complicated to explain here, but you can use it to create a new program that's synthesised out of tiny little fragments of signed code from all over the place, and that program then does whatever you want. And because it's running signed code, even if it's not in the configuration it was meant to be used in, it's okay. Windows Secure Boot: again, there's privilege escalation through things in the binaries and the kernels. Also, these devices have non-volatile RAM, flash memory that stores configuration data, and you can simply skip the signature check: there are various flags in the NVRAM that say perform these checks, and when you clear the flags, the checks aren't performed any more. CCC: so, a bunch of hacker conferences, particularly DEFCON, have used active badges. These are badges that are actual circuit boards with processors on them, and they use NFC communication or ZigBee or something like that. And one of the challenges with these badges is: hack the conference badge. The CCC 2011 badge, for example, used a cipher called Corrected Block TEA, which is actually a fairly strong, fairly secure cipher with a 128-bit key. And the idea was, hack this badge. So various people found exploits, and all of them bypassed the need to break the encryption. The encryption was not broken; they just bypassed it, in all cases, and hacked the badge. And once they'd done that, well, the key was stored inside this conference badge, inside the CPU. Once you can run your own custom code, you fetch the key and send it out, and once you've done that you can take over the system. So, yeah, it's probably a sign of something scary when your conference badge actually comes with a rootkit. The Xbox attack: this was an incredibly neat attack, done by a guy called bunnie Huang.
So the first Xbox used HyperTransport buses, and this was an unbelievably high-speed bus for the time. No standard bus analyser could read data off this bus, because they couldn't cope with the speed. There were HyperTransport bus analysers (probably Intel had a couple, AMD had a couple, and so on), but the general public did not have anything capable of analysing this bus. What bunnie noticed was a couple of things. First of all, HyperTransport uses a signalling protocol called LVDS, which is a fairly standard signalling protocol. So he didn't need a HyperTransport bus analyser; he could use an LVDS transceiver, solder that onto the bus, and decode the signalling. An FPGA, which is a field-programmable gate array, so it's like programmable hardware, wasn't fast enough to process the data, again because of these very high data rates. So what he did is he sat down with an oscilloscope and actually characterised the signal paths through the FPGA, and picked out, mapped by hand, the ones that were the fastest, the ones with the least delay in them. And then he routed the signals through the chip on the fastest possible paths. So he got a much, much faster effective FPGA than you could normally get from the computerised layout. He needed some other tricks too, including overclocking it; overclocking a CPU is sort of standard, but overclocking an FPGA is just a very weird concept. With all these tricks he managed to get the thing running fast enough to pull the raw data off the bus, which he shouldn't have been able to do, and he wrote a book about it called Hacking the Xbox, which is kind of an interesting read. Then there were later attacks, which used different techniques. For example, there's an old smart-card hacker's trick from the 1980s and 90s. A lot of the smart cards back then used small embedded controllers like the 8051, the standard 8-bit controller.
And these could boot from on-board ROM, but you could also boot off external ROM if you wanted to burn your own. So you force the CPU to boot off external ROM rather than internal ROM; it's now running your code, and you own the system. Architectural quirks in the CPU: Microsoft originally designed the Xbox with AMD CPUs, and the final product shipped with Intel CPUs, presumably because Intel gave them a better deal. And these have tiny little architectural differences. So the code was written for the quirks of the AMD CPU and ended up running on the Intel CPU, and you exploit that. And then, again, there's the code-versus-data thing. TrueType font files are data, so they're not verified, whereas in fact they can contain executable functionality. So you exploit that. With the PS3, there's a variant of the first Xbox attack, again going back to smart-card attacks from the 1980s. You don't try to read the data off the bus, you just glitch it. You glitch the memory bus. What happens is the CPU writes to memory and stores a copy in cache. You glitch the bus, so what's actually stored in memory is different from what the CPU thinks it has in cache, and at that point the CPU has gone out of sync with the data it stored. And, you know, obviously you're not always going to glitch the right thing, but if you don't get what you want, you just reset it, glitch it again, and so on, and iterate until you've got the CPU seeing things that aren't actually there. The Xbox 360 fell to another glitch attack. It's doing a hash comparison; you glitch the result of the comparison, so no matter what the hash value actually is, it always says it matched. Yeah, these are 15-to-20-year-old smart-card hacker tricks. So, okay, how unnecessary is it actually to attack the crypto? Are people actually attacking the crypto?
There's a sort of security philosopher called Dan Geer, who said something that's become known as Geer's Law: any security technology whose effectiveness can't be measured is indistinguishable from blind luck. We've got this black box that does lots of security; we can't measure whether it's working or not, so you can't tell whether it's just luck that it's working. But we do actually have some metrics. Luckily for us, a sort of who's who of reasonably well-known companies inadvertently carried out an experiment on how effective or ineffective crypto actually is. Researchers noticed in about late 2012 that these companies were using what's been characterised as toy keys for DKIM signing. These are keys you can break on a single laptop; they are completely useless for actual security. There were 12,000 organisations using DKIM, and 4,000 were using keys so weak that an individual attacker with a laptop could have broken them. So, one in every three keys was so weak that it offered, not zero security, but let's say laughable security. The thing is, if you've got this weak crypto deployed for DKIM, why wasn't anyone attacking it? The answer is that there were so many other ways to attack DKIM that nobody had to bother attacking the crypto. Anybody with a bit of technical knowledge could have broken this crypto, but there wasn't any need to, because there were many, many other ways that were much, much easier. And again, that's back to Shamir's Law: crypto is bypassed, not penetrated. So here are some examples of strong crypto. Let's say you want to use some of the strongest crypto you can get: not AES-128 but AES-256, we want keys that go to 11. So that's the original image, unencrypted. And that's the same thing encrypted with AES-256 in ECB mode.
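Why the ECB-encrypted image still shows the picture can be sketched in a few lines. The "cipher" below is just a keyed hash standing in for a real block cipher (an assumption for the demo, not real AES); what matters is ECB's defining property: each block is encrypted independently, so identical plaintext blocks produce identical ciphertext blocks and the image's structure survives encryption, no matter how big the key is.

```python
import hashlib

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Deterministic keyed transform standing in for a block cipher.
    # NOT a real cipher; it just shares ECB's fatal property:
    # same key + same plaintext block -> same ciphertext block.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, data: bytes) -> list:
    # ECB: every block is handled independently, with no chaining
    return [toy_block_encrypt(key, data[i:i + BLOCK])
            for i in range(0, len(data), BLOCK)]

# A "bitmap" with repeated regions, like the flat areas of the penguin image
image = b"\x00" * BLOCK + b"\xff" * BLOCK + b"\x00" * BLOCK
ct = ecb_encrypt(b"k" * BLOCK, image)

print(ct[0] == ct[2])   # blocks 1 and 3 encrypt identically
print(ct[0] == ct[1])   # different blocks differ, so the outline shows through
```

A chaining mode like CBC or a counter mode mixes in position-dependent state, which is exactly what breaks this pattern.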
That's because there are insecure encryption modes, of which ECB mode is one, that appear to be... well, they are encrypting, it's just that the result is not quite what you expected. And this is an example of "we need bigger keys": no, bigger keys aren't going to help you. Okay, what about HSMs, hardware security modules? These are widely used in banking, they're used in CAs, and the DNS zone-signing keys are stored in HSMs. These things are very physically secure devices: they're shielded, they're tamper-responsive, they detect radiation, they detect excessive or low temperatures, they detect attempts to poke into them, whatever. And they've got all the keys locked up, encrypted, inside them. So let's say you're a bank and you're using this for PIN processing. This is how banks do PIN processing, and it's pretty straightforward. You've got this thing called the customer's primary account number, or PAN. You encrypt it under something called the PIN derivation key, or PDK, and that gives you a PIN. All of this is done inside the HSM; the PIN keys and the encryption never leave the HSM's security perimeter. The result of this encryption process is, for each account, a unique value which is a series of hex digits. Then you use something called a decimalisation table to turn it into actual decimal digits, 0 to 9, which can be entered at an ATM PIN pad. In the example, you use the PDK to encrypt the PAN, you get that hex string, you decimalise it using the table up there, and you get 2036, and that's the final PIN. So that's a fixed PIN for each account. If you have a customer-defined PIN, where you set your own PIN, you add something called a PIN offset, which is just a value you add to the derived PIN to get the customer PIN. Since the PDK-encrypted PIN data is secret and never leaves the HSM, the fact that the PIN offset is known doesn't make any difference; it just modifies the final result.
So, to verify this thing, an encrypted PIN block comes in from an ATM, you feed it into the HSM at the bank, the HSM does a lot of processing inside the device, and it says verified or not verified; that's all. The only information the HSM ever emits is a single boolean flag, yes or no. And you'd think this is incredibly secure: nothing ever leaves the HSM, it's got to be secure. Or not, really. These decimalisation tables are customer-defined. So what you do is you use a modified table to guess each PIN digit. Look at what that table does: zero is mapped to one, but one is also mapped to one. So this is messing with the PIN value that's being checked. You take your PIN block, and if the HSM still reports success, even though you're mapping zeros onto ones, then you know the PIN contains no zero digits. You try that for each digit, and you can work out which digits are and are not present in the PIN. Once you've done this, and again you haven't attacked the HSM in any way, the only information it gives you is yes or no, you've discovered which digits are in the PIN. You don't know where they are. To discover where they are, you now use the PIN offset. Again, you use that fiddled decimalisation table that maps zero to one, as well as one to one. Then you take the PIN offset and change it to cancel out the modification you've made. So you try 0001: the HSM reports failure, so the digit isn't at that position. You move the one digit up a position: still failure. You move it up one more: it reports success, so you now know the zero digit is at that position. You do this for each digit, and at that point you've recovered the entire PIN from a device that only tells you verified or not verified. There's a bunch of guys in the UK who've done a lot of work on this.
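The two-phase attack can be simulated end to end. Everything here is a toy model: the hypothetical hex value "C0D6" stands in for the HSM's secret PDK-encrypted intermediate (with the standard table it decimalises to 2036), the oracle returns only a boolean, and for simplicity the PIN is assumed to have no repeated digits (the full attack handles repeats with more queries):

```python
STD = "0123456789012345"   # standard decimalisation table
SECRET_HEX = "C0D6"        # hidden inside the "HSM"; the attacker never sees it

def hsm_verify(trial_pin: str, dectab: str) -> bool:
    # The HSM decimalises its secret intermediate with the *caller-supplied*
    # table and compares; the only output is this one boolean.
    real = "".join(dectab[int(h, 16)] for h in SECRET_HEX)
    return trial_pin == real

def recover_pin() -> str:
    pin = [None] * 4
    for digit in range(10):
        # Table that maps hex values decimalising to `digit` onto 1, rest to 0
        dectab = "".join("1" if STD[p] == str(digit) else "0"
                         for p in range(16))
        # If "0000" no longer verifies, `digit` occurs somewhere in the PIN
        if not hsm_verify("0000", dectab):
            for pos in range(4):   # now locate it (distinct digits assumed)
                trial = "".join("1" if q == pos else "0" for q in range(4))
                if hsm_verify(trial, dectab):
                    pin[pos] = str(digit)
    return "".join(pin)

print(recover_pin())   # recovers 2036 using only yes/no answers
```

Note the query count: at most 10 presence tests plus 4 position tests per present digit, so the whole PIN falls out of a few dozen yes/no answers, which is the essence of the Bond-style decimalisation-table attack described above.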
So that's completely bypassing the crypto, using only a decimalisation table and a PIN offset. In all of these cases, the number of attacks that actually broke the crypto was zero, and the number that bypassed it was everything else. No matter how strong the physical protection was, no matter how large the keys were, the attackers just walked around them. Here are some more examples, again from NSA slides. They target IPsec, so basically routers doing VPN. The important thing is at the bottom: TAO is Tailored Access Operations, which is the NSA's hackers. They got the configuration files, and, in crypto and NSA jargon, PSKs are pre-shared keys. So basically they read the encryption keys out of the configuration files. They didn't care what crypto was being used. Another slide: the NSA had an implant. This is again NSA jargon for a backdoor, a piece of hardware modification, or whatever. So they modified the routers, they put this implant in them, and then it didn't matter what encryption was used, because they had a backdoor to it. So, okay, getting back to BULLRUN. This is from the New York Times, and it talks about the NSA hacking into target computers, and also that companies were coerced by the government into giving up encryption keys. So, okay, how would they hack in? Well, this is a one-week vulnerability summary, and it's actually not the complete summary. I've picked out the bits that are escalation of privilege, allows remote attackers to obtain sensitive information, allows local users to gain admin privileges, allows man-in-the-middle attacks. These aren't just simple denial-of-service issues; I've thrown all those out. These are the serious ones, and I couldn't even fit them on one slide; the list actually continues for some way down there. So if you're asking how the NSA would hack into computers, well, that's one week's worth of the vulnerabilities they could use to get into computers.
And there's a discussion of some of the stuff they were doing with that. They've published complete catalogues of some of the devices they use for this. I'll put the slides up online if anyone wants them, so you probably don't need to take photos. I'll skip through these, because there are pages and pages of this stuff, just to illustrate: they've got implants and exploits that work against Windows, against wireless access points, against different types of routers, PCI bus implants, MBR attacks, BIOS attacks, System Management Mode attacks, and so on and so forth. But as I said, I'll put the slides online if you really want to read through this: USB man-in-the-middling, USB links, Wi-Fi deception, and so on. So, yeah, no matter what you've got, they've got it covered. As I said, and I'm not being sarcastic here, the government is really getting good value out of these guys. So that's the backdooring stuff, and then there's also coercing companies. They do this through something called a national security letter, and this is basically the legalised form of what's known as rubber-hose cryptanalysis: you get someone and you beat them with a rubber hose until they give you the keys. What an NSL does is it means a government agent can come to you and serve you with a national security letter, and you're then required to hand over whatever it is they ask for, and it contains a built-in gag order, so you can't talk about it. If I've served you with an NSL, you have to give me the data and you can't tell anyone about it. So basically what this does is it bypasses the crypto at the service provider. If you've got a strongly encrypted link to, let's say, Google, because we know they've been targeted, it doesn't matter how strong that encrypted link to Google is, because Google are forced to hand over the data. The FBI have historically way, way overused these while under-reporting their actual use; 2013 was a year with 19,000 NSLs issued.
The only thing you can do in response to an NSL is to shut down, and several providers, like Lavabit, Silent Mail and some others, have actually done that: without any explanation, they simply shut down their services. Remember the gag order; they can't talk about why they're doing this. All they can do is shut down. And that was presumably because they were served with national security letters and chose to shut down rather than comply. Now, if you're Google, you can't afford to do that; you just have to comply. More stuff they've done, and again I've extracted the text to make it more readable: covertly influence and/or leverage commercial product designs; change the designs so that they appear to be functioning, but they're not actually functioning like they should; and there's some more background information on this. The most notorious of these is Dual_EC_DRBG, which is a random number generator. In 1985, ANSI X9.17, which is a banking standards committee, specified a fairly simple pseudorandom number generator for generating keys for banking use, so basically in ATMs. That's the entire generator: it's three lines of code, and it's a pretty decent generator. It relies on the security of triple DES keys; this was 1985. It's fairly secure. In 1998, NIST adopted it. NIST is the National Institute of Standards and Technology, the US government standards agency. They adopted it pretty much verbatim; they updated it to use AES, but that was it. And then, over a period of years, a bunch of people at NIST hacked around with these generators. Someone came in, hacked around with the generators a bit, left, and so on. So it was kind of like design by committee, but in series rather than in parallel. And the final thing was published in 2012, in a publication called Special Publication 800-90. Some of the generators in it were pretty straightforward and sensible. There was the standard X9.17 generator, which is half a page; it's three lines of code.
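The X9.17 generator really is about three lines per output. Here is a sketch of its structure, with a keyed hash standing in for the triple-DES encryption (an assumption for the demo, so it runs without a DES implementation; the real thing encrypts 64-bit blocks under a triple-DES key):

```python
import hashlib
import time

def E(key: bytes, block: bytes) -> bytes:
    # Stand-in for triple-DES encryption under `key` (NOT real 3DES)
    return hashlib.sha256(key + block).digest()[:8]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def x917_output(key: bytes, V: bytes):
    # The whole generator: encrypted timestamp in, random block and new state out
    T = E(key, time.time_ns().to_bytes(8, "big"))
    R = E(key, xor(T, V))   # the random output block
    V = E(key, xor(T, R))   # the next seed state
    return R, V

V = b"\x00" * 8             # initial seed state
R1, V = x917_output(b"bank-key", V)
R2, V = x917_output(b"bank-key", V)
```

Its security reduces entirely to the secrecy of the cipher key, which is why such a short construction was considered adequate, and it makes a stark contrast with the 16 pages of Dual_EC_DRBG discussed next.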
Some were not. There's Hash_DRBG, which is five pages of description; we've now gone from three lines of code to five pages. And then there's Dual_EC_DRBG. This is 16 pages of description, pages and pages of maths. It's complex, it's awkward, and it's incredibly slow. This is a really, really stupid generator. The NSA pushed very hard to get it into other standards too: ANSI, the banking standards, and ISO, the international standards. And you look at this, and it's so obviously a bad generator that no one in their right mind would use it. There's a cryptographer saying that people who advocate this sort of thing are basically nutcases. So no one would use it, except for a pile of US companies, including those guys and those guys. And RSA made it the default in their crypto library. Now, OpenSSL... I'm going to get sidetracked into a slight digression. There's a US government standard called FIPS 140, which is used to certify security software. The OpenSSL guys implemented this generator, but they got it wrong, so the generator does not work. And then the non-functioning generator was validated by the US government as functioning and correct. The problem is, because it's now validated, you can't fix the bug, because that would invalidate it. So it's validated but doesn't work, and if you fix it to make it work, it's no longer validated. That pretty much describes the value of FIPS 140. And there are some quotes from the people involved. According to them, there were several hundred validations, not just one; lots and lots of different companies got it validated, and all of them validated this non-functioning generator as correct. And again, it can't be fixed. So, okay, it's a stupid design; what's the actual problem? I don't want to get too deep into this.
There's a really good post on this which describes it in a great amount of detail; here's a short summary of just one issue. When two SSL or TLS endpoints do a handshake, each side sends a 32-byte value. This is just a random seed value that gets mixed in so that every new handshake has unique crypto keys and whatnot. If you use this generator to generate those seed values, an attacker can predict what's called the TLS pre-master secret, which is the secret value that then protects the rest of the session and all your keys. So just by looking at a handshake, if it was generated with this generator, you have no more security. But the NSA wanted to make this attack even easier. So they authored, or co-authored, or sponsored, several drafts which leaked even more generator output as part of the handshake. And, you know, maybe not all of them are going to get through, but if at least one of them gets through, it makes this attack even easier. So why would RSA do this? Well, you can kind of see their reasoning: it's specified in a US government standard, they have lots of government customers, they implemented all the generators in the standard, including the really stupid ones, so why not implement this one as well, if we've got all the others in there? It's actually a lot more sinister than that, though. It was reported in the US news that they had actually been paid $10 million to make this the default generator in their BSAFE crypto software. So more than a third of the revenue for the entire division that produces the crypto software came from them being paid by the US government to make this broken generator the default, which led to this interesting news report. And Microsoft added it as well, without the bribe, obviously: they had a major customer, which is basically code for the Department of Defense, which is probably the NSA, who asked for it. OpenSSL, the same thing: a sponsor requested it as a deliverable.
So OK, whatever you think about this: RSA made it the default, but nobody else did, apart from some small company I'd never heard of before. Everywhere else you have to explicitly configure it; it's not the default. But we've got an agency that, according to news reports, hacks into computers and makes modifications. So you've got this dangerous facility that's one bit flip away from being made the default, and you as a user can't tell whether it's being used or not because, quoting the NSA's own words from earlier, it appears to work, just not actually working any more. The potential for a backdoor was first pointed out back in 2005 in a patent filing by a couple of cryptographers.

Then there are the proof-of-concept demonstrations. The Dual_EC_DRBG parameters in the standard were generated by the NSA, so we don't know the secret key; but people have generated their own parameters, for which they do know the secret key, and published proofs of concept online. A couple of researchers later patched the generator in BSAFE (RSA's implementation), in Windows, and in OpenSSL to use their own parameters rather than the NSA's, and found that it was relatively easy to recover the keys. For BSAFE it takes four seconds to recover the key from this potentially backdoored generator.

So there's more: what else could they have backdoored? Elliptic curve cryptography. That's a general name for it, but it's not really an algorithm; it's more a set of toothpicks and a tube of glue, and you've got to assemble all the bits and pieces yourself. To deal with the fact that two independent implementations otherwise won't be able to talk to each other, standards bodies publish fixed sets of parameters, known as curves, which define all these things.
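To make the bypass concrete, here is a toy model of how such a proof of concept works. Everything here is invented for illustration: a tiny curve, made-up points, and no output truncation (the real generator runs over NIST P-256 and truncates its output, which is why the published attacks take seconds rather than microseconds). The structure, though, is the standard Dual_EC one: state update s ← x(s·P), output r = x(s·Q), and a trapdoor scalar e with P = e·Q.

```python
# Toy model of the Dual_EC_DRBG backdoor on a tiny made-up curve.
P_MOD, A, B = 233, 1, 7          # curve: y^2 = x^3 + x + 7 over GF(233)

def add(p, q):
    """Affine point addition; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def mul(k, p):
    """Double-and-add scalar multiplication."""
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

def lift(x):
    """Find a curve point with the given x (brute force is fine on a tiny field)."""
    rhs = (x ** 3 + A * x + B) % P_MOD
    return next((x, y) for y in range(P_MOD) if y * y % P_MOD == rhs)

# Public points: Q is any base point, P = e*Q where e is the trapdoor scalar.
Q = next((x, y) for x in range(P_MOD) for y in range(P_MOD)
         if (y * y - x ** 3 - A * x - B) % P_MOD == 0)
e = next(k for k in range(17, 60) if mul(k, Q) is not None)
P = mul(e, Q)

def step(s):
    """One Dual_EC step: s <- x(s*P), output r = x(s*Q); None if we hit infinity."""
    sp = mul(s, P)
    if sp is None:
        return None
    sq = mul(sp[0], Q)
    if sq is None:
        return None
    return sp[0], sq[0]

# Pick a seed whose first two steps stay clear of the point at infinity.
seed = next(s for s in range(2, P_MOD)
            if step(s) is not None and step(step(s)[0]) is not None)
s1, out1 = step(seed)          # victim's first output; the attacker sees out1
s2, out2 = step(s1)            # victim's second output; the attacker must predict it

# Attacker: lift out1 back to the point s1*Q, multiply by the trapdoor e to get
# e*(s1*Q) = s1*(e*Q) = s1*P, whose x coordinate is the next internal state s2.
recovered_s2 = mul(e, lift(out1))[0]
predicted_out2 = mul(recovered_s2, Q)[0]
assert recovered_s2 == s2 and predicted_out2 == out2
```

Because the attacker chose e, one output is enough to recover the next internal state and predict every subsequent output, which is exactly the TLS attack described above.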
Here's an example of such a curve. You don't need to understand it; it's just an example of all the information you need in order to talk to someone else using a fixed set of parameters. So OK, how were these generated? Well, they were generated deterministically: take a fixed seed value and derive the parameters from that seed, and then anyone can repeat the derivation and verify that the parameters really do come from that seed. OK, stepping back a bit: what's the seed value? Stepping back further: where did the seed come from? Oh, some guy at the NSA gave it to us. Now, Jerry Solinas is a known NSA mathematician who has published papers on ECC, but nobody has ever explained what the significance of this magic value is.

So how would you backdoor the NIST curves? Let's say you're a large government agency, you've got lots of computing power, and you want to backdoor these curves. How would you do it? Let's say you know of an attack that takes about 2^64 operations, which is well within the means of a large government agency, and that roughly one curve in a billion is vulnerable to it. There are some other requirements too; I won't go into the mathematics behind this, but there are ways to recognise when you've succeeded. So you want a curve that passes all the NIST validation tests and yet has certain magic properties that you like, and that nobody else, if they found out about them, would like. It takes 78 minutes to generate one candidate curve; taking the additional step of deriving the curve from a hashed seed value, so that you get the verifiability, pushes that to 7 hours. There's a great acknowledgement at the bottom of the paper; I'll let you read that for a sec. So the way to do it is this: you've got a one-in-a-billion chance of finding one of these magic seeds, so you keep generating billions of curves until you get one that matches, and you put it into the NIST standard, which is the de facto standard used by US software vendors.
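The search procedure described above can be sketched abstractly. In this sketch the one-in-a-billion weak-curve property is replaced by a cheap stand-in predicate (about one in 4096), and SHA-256 stands in for the real derivation; the names and numbers are illustrative, not the actual NIST procedure:

```python
import hashlib

def derive_params(seed: bytes) -> int:
    """Stand-in for seed-to-curve derivation: parameters hashed from a seed."""
    return int.from_bytes(hashlib.sha256(seed).digest(), "big")

def attacker_likes(params: int) -> bool:
    """Stand-in for the secret weakness test; here about 1 in 4096 qualify."""
    return params % 4096 == 0

def find_magic_seed() -> bytes:
    """Grind candidate seeds until the derived parameters have the property."""
    counter = 0
    while True:
        seed = counter.to_bytes(8, "big")
        if attacker_likes(derive_params(seed)):
            return seed
        counter += 1

magic = find_magic_seed()
print(magic.hex())
```

The published seed still looks arbitrary and passes every check, because verifiability only proves that the parameters came from the seed, not that the seed itself was chosen honestly.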
You push it into international standards, and now you've got a global backdoor on elliptic curve crypto, or at least on that set of curve parameters. Now, we don't know whether they did this or not; this is merely a case of: if you wanted to do it, this is how you could. There are other government standards which are even worse; they just say, here are the parameters, use them, trust us. A group of people in Europe known as Brainpool recognised that this was a problem back in 2005, well before any of these revelations. They said, we don't know where these seed values came from, so they generated their own seeds from the digits of pi. One would hope that pi has not been backdoored by anyone. And there was an interesting reaction to this. In October 2013 an RFC, an internet standard, was published that listed these Brainpool curves, so alongside the NIST curves you could also choose to use the Brainpool curves. It was announced on 15 October 2013, and on the same day a whole bunch of open source crypto projects said, yep, we're supporting these right now. So within 12 to 24 hours of the standard being published, the open source community had said, we're going to support this, and other implementations added support within several days. This is a standards community; I mean, it normally takes them a week to agree on where to go for coffee, and yet here they moved within 12 to 24 hours. I've never seen them move that fast before.

Then there's IPsec. "It just cannot have got this bad by accident": there's a great quote to that effect from Bruce Schneier and Niels Ferguson. Nobody's satisfied with it. The documentation is hard to understand. It's full of mistakes, it contradicts itself, there's no rationale, you have to guess at everything. I mean, did they actually do this on purpose? The thing is, maybe not; we don't know. But there's a long history behind this sort of thing. This is an OSS field manual on sabotage, and the sabotage techniques it describes are brilliant.
It gives various techniques for sabotaging, for example, a standards process: insist on doing everything through channels, make speeches, refer all matters to committees, bring up irrelevant issues, and refer back to matters that were already resolved at the previous meeting and insist on re-deciding them now. For a manager or supervisor: misunderstand orders, delay the delivery of orders, make mistakes in routing things. When training workers, give incomplete or misleading instructions. To lower morale, be pleasant to inefficient workers and give them undeserved promotions, and discriminate against efficient workers. Now, how can you tell this isn't standard operating procedure at most companies? But still, that really is a sabotage manual: if you want to sabotage, say, a committee proceeding, that's how you do it, and the full text is available online if people want to grab it. So the thing is, was IPsec deliberately sabotaged? Probably not.
I mean, it was designed by committee; it is the product of a large standards committee, and the lesson the analysis draws is that crypto protocols should not be designed by a committee. But in any case, there's the bypass-versus-attack point again: it doesn't matter, because the NSA has means of subverting it. These are various slides from NSA presentations on how they subvert IPsec. Again, it's lost in lots of technical detail, but what they do list is all the programs they use against the hardware from different vendors: if you've got Cisco, you've got those programs; for Juniper, you've got those programs that bypass the hardware, and so on.

A little bit of an aside: there was a big fuss made a while back about Huawei routers, that they represent "an unambiguous national security threat to the US and Australia". That was said by Michael Hayden as the head of the NSA, and so, yeah, we'd better buy expensive US gear, because we know that's safe, rather than the much cheaper Chinese gear. Well, here's all the stuff the NSA uses to bypass and backdoor the expensive US gear. There's a great quote from the Guardian: American companies were being warned away from these dangerous Chinese routers, whereas it would have been better if they'd been warned about the dangers of American-made routers. And here's a quote straight from the NSA: basically they talk about intercepting these routers as they go out from Cisco, Juniper or whoever, opening the packaging, modifying them, repackaging them, and sending them on to their destination. And here are photos of that actually being done: they're unpackaging routers on the table on the right, modifying them, inserting the implants, which are the hardware or software modifications, and then sending them back out again. There were some more comments on that, and Cisco wrote to the President saying, you know, we really can't tolerate our own government taking our products and booby-trapping them before they get out to our
customers. So in fact, the warning to beware of Huawei because they're evil should really have been that the NSA is the security threat, not necessarily the Chinese.

So OK, how do we deal with this? Crypto-wise, we don't really need any special NSA-proof protocols. After these revelations people said, do we need new NSA-proof crypto? Well, any well-designed crypto is going to be NSA-proof; it's going to stop everybody from the NSA and CIA down to your mother and your cat. You don't need custom NSA-proof protocols or crypto. In fact, some of us apparently don't need crypto at all: let's move everything to the cloud, it's brilliant, we'll move everything onto servers controlled by someone else. Except it's not brilliant, because the cloud is exactly where the NSA is going to get it. Leverage the safety of your local server: getting something from Gmail via a National Security Letter is relatively easy; pulling it off a mail server in your back room is not that easy. Now, the counter-argument to that is that most companies aren't as good as Google at running a mail server. There's a long-standing financial maxim: if you don't hold it, you don't own it. Bullion investors who suddenly found out that there's no actual gold in the vaults learned this the hard way. And there's a corollary: if you don't hold it, maybe the NSA does.

This goes back to a pre-crypto mechanism called geographic entitlement; the modern term for it is a location-limited channel. I have access to, for example, my laptop because I'm physically standing in front of it; someone on the other side of the world can't do that. So you limit access based on physical location and short-distance links. This is a photo from the 1970s; that's actually Ken Thompson in it, using an early machine he was playing around with pre-Unix, and it's an example of geographic entitlement: to demonstrate your authority to access the computer, you had to get into a locked room inside a very large building. So it didn't
matter how much crypto and whatever else you used, because the only people who could access that computer were the people sitting at the console right in front of it. So, in plain English: don't put your data where the NSA can get it. There's already been pushback in Europe against exporting data to the US, which unfortunately just means that the European spooks can get it and the NSA can't; except that the NSA has sharing agreements with the European spooks, so, yeah, don't go there.

So, to conclude, the most important quote is actually the top one. There's a guy called Drew Gross who does computer forensics, and he says he loves crypto because it tells him which part of the system not to bother attacking. I would actually modify that quote: I love crypto as well, because it tells me which bit of the system to look near for all the holes. You get a system and it's using AES encryption and digital signatures and it's all really, really strong, and then in the code that actually uses it they forget to check the result of the signature verification, they use hard-coded keys, they do everything else wrong. So the crypto isn't just something not to bother attacking; it points you at all the security holes. You find the crypto, and the security holes are sitting right next to it.

OK, we have a very short amount of time for questions, or otherwise grab me afterwards. If people have a question, please put your hand up and someone will appear with a microphone. Questions, anyone?
Not so much a question as a comment: I've read that some agency in the States put radio transmitters into lots of PCs to get information out of them, so even a locked room may not be sufficient.

All electronics radiates radio-frequency noise; not huge amounts, not enough to affect you, but certainly measurable quantities. So you don't necessarily need to plant a radio transmitter: you can pick up electronic leakage from laptops, from PCs and so on. It doesn't have to be an explicit transmitter. If you've got a PC that has shielding around it so that it's not radiating noise, you can put a small break in the shielding so that it radiates more than it's supposed to. You can examine that machine and it seems perfectly OK: there are no extra bugs, no extra hardware planted in it, no modifications. It just happens that, if you've got the right gear, you can exploit the fact that there's a break in the shielding and it's leaking emissions. Implanting a radio transmitter is a very visible sign of tampering; cracking the shielding, or replacing a piece of aluminium foil or an aluminium plate with shiny coloured plastic that doesn't stop the radiation, is much simpler to do and you can't really detect it that easily.

Any other questions? I'll come over for that one.

This also relates to hardware attacks: what do you think about how safe keyboards are nowadays? If I shipped those things, I would personally put a big memory chip in them to record everything typed.

So you're asking about attacking the keyboard itself rather than the computer.
I would not type a crypto key on a wireless keyboard, because they're horribly insecure. And even a wired keyboard is a black shape whose inside I can't see: I don't know what's in there or how much RAM it has. But that assumes, for one thing, that someone has actually got at your physical device or is in close proximity to you, and this is getting into tinfoil-hat territory. If you're enough of a target for a government that they're going to break into your house and backdoor your laptop, or stand outside your window, then I think no matter what you do they're going to get you in some way or other. So another defence is: don't be a target.

Right, we have time for one more question at the moment. Oh, yep, go ahead.

So this makes me think that the real security we have is just obscurity: being nobody, being someone who seems so unimportant that they never bother looking. Is that reasonable?

Again, don't be a target is a really, really good defence, because as the NSA has expanded its surveillance more and more, they're drowning in data. So as long as you don't stick out, if you're one of a billion people sending boring text messages, they're probably not going to spot you. So yeah, don't be a target is a really good strategy.

We have time for one short question.

Further to that point: what do you think your own status is, now that your web browser history is full of searches for all these things?

Yeah, I'm a lost cause. I did a talk at a conference called Kiwicon late last year on the history of bomb fuses. Germany used electronic bomb fuses in World War II, and I did a talk about defusing these electronic bomb fuses, basically hacking bomb fuses. I've racked up enough searches for bomb fuses and defusing and the like that I'm probably on about a thousand lists, so yeah, I'm a lost cause.

He said it: he's on the watch lists. Yeah.

So unfortunately we'll have to conclude the recording here, but is there a demand for questions after the
recording? OK, we'll wrap up at this point. On behalf of the LCA, we would like to congratulate you and thank you for your talk, and here is a token of our appreciation. Thank you.