Joel asks, "What are the odds of a rogue developer, or group of developers, slowly inserting malicious code into the code base in order to create a security flaw? Is this an attack vector intelligence agencies can use to undermine Bitcoin?"

That's a great question. One of the supposed advantages of open-source code is that "many eyes make all bugs shallow." I can't remember who said that; it might have been Eric Raymond. The idea is that with open-source development you have many, many people reviewing, testing, and working with the code, which ensures that if there is a bug, it's quickly discovered.

What about deliberately inserting a bug into the code? It's not as easy as it seems, but it's also not impossible. Even though open-source development creates opportunities for review, we have seen that in many widely used, well-tested, well-reviewed, supposedly well-audited open-source projects, vulnerabilities have been discovered, some of them really serious, one would even say fatal, vulnerabilities. A great example is OpenSSL, the library that forms the backbone of most of the web through HTTPS and transport layer security (TLS, formerly known as SSL); several bugs, like Heartbleed, have been discovered in these SSL libraries.

Interestingly enough, one of the ways some of those bugs got discovered was through the extensive testing done by open-source teams working in cryptocurrency, such as the Bitcoin Core development team. In the process of adding extensive test suites to Bitcoin Core, back when it still used OpenSSL, the Bitcoin Core developers discovered a couple of previously unknown bugs in the OpenSSL infrastructure. They tested OpenSSL better than it had been tested up to that point, and discovered bugs.
Today, Bitcoin doesn't use OpenSSL. The Bitcoin Core implementation has switched to a from-scratch implementation of the cryptographic primitives called libsecp256k1, a very fast, very well-tested, and robust library for the fundamental elliptic-curve operations used within Bitcoin (the SHA-256 hashing is implemented separately). So it's less likely that you would find a security flaw there.

Another reason it's very hard to introduce security flaws into these protocols is the extensive testing. Building batteries of tests that take every piece of code and subject it to rigorous, grueling testing, to find the edge cases, exceptions, and problems in the code, makes it difficult to slip something in without it being noticed. Also, developers working on this code would be highly suspicious of someone with no prior involvement suddenly tinkering with the really important cryptographic primitives using new code that's not well understood. Not as easy to pull off, but, again, not impossible.

So, in many cases, you've got to think of security in these cryptosystems as a matter of defense in depth. That means having layers of defense, so that if a flaw is introduced in one of the cryptographic primitives, there are other ways to prevent that flaw from being catastrophic across a broad range of addresses. For example, let's take ECDSA digital signatures, a key component of Bitcoin. Digital signatures, when applied to Bitcoin transactions, reveal the public key for the first time. Until the moment a transaction is broadcast, the public key isn't known. The reason is that, instead of a public key, what's encoded in the recipient address is a double-hashed public key. The public key is not revealed; it's only revealed when you spend from it.
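That commit-then-reveal property can be sketched in a few lines of Python. This is a simplified illustration: real Bitcoin addresses use HASH160 (SHA-256 followed by RIPEMD-160) plus Base58Check or Bech32 encoding, whereas here double SHA-256 stands in for the hash, just to show that the address commits to the public key without revealing it.

```python
import hashlib

def hash_pubkey(pubkey: bytes) -> bytes:
    # Simplified: real Bitcoin uses RIPEMD160(SHA256(pubkey)), "HASH160";
    # double SHA-256 here illustrates the same one-way commitment idea.
    return hashlib.sha256(hashlib.sha256(pubkey).digest()).digest()

# A hypothetical 33-byte compressed public key, for illustration only.
pubkey = bytes.fromhex("02" + "11" * 32)

# The "address" commits to the key without revealing it...
address_hash = hash_pubkey(pubkey)

# ...and anyone can verify the key once it is revealed at spend time.
assert hash_pubkey(pubkey) == address_hash
assert address_hash != pubkey
```

Until the spending transaction reveals `pubkey`, an attacker holding only `address_hash` would have to invert the hash function to learn the public key, let alone the private key behind it.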
If your wallet follows good practices, using each address only once and immediately moving the money, with the change, to a different address, then the very first time the public key is revealed is also the last time that public key is ever used, and the address is already empty at that point. That offers a layer of security: even if there is some kind of problem with a digital signature, for example the use of insufficient entropy in producing the signature, the damage is contained. The signature is compromised, and maybe the public key and private key are compromised with it, but they're compromised for an address that has already been used, will never be used again, and has no money in it. So, no harm. Again, these are some of the mechanisms used to layer security, so that there isn't a dependence on a single critical piece of code, cryptography, or security throughout the system that would cause a broad-based catastrophic outcome in the case of compromise.

So, rogue developers: less to fear than rogue waves when you're traveling on the ocean, or a rogue volcano hanging out in a residential neighborhood. Rogue developers are probably low on the list of things you should be worried about.

Anonymous Coward asks, "Hardware wallet third-party trust: does the trust that hardware wallet users place in the hands of the hardware wallet developers worry you? Aren't hardware wallet firmware updates undertaken on trust in these third parties, and therefore with some risk to users? Can you envision an average-user-friendly hardware wallet solution that wouldn't ever require firmware updates?"

All right. Yes, there is a level of trust involved in doing a firmware update on a hardware wallet. Generally speaking, you might want to wait until the update has been audited or reviewed by security professionals.
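To see why insufficient entropy in a signature is so dangerous, and why single-use addresses contain the damage, consider the classic ECDSA failure: signing two messages with the same nonce k lets anyone recover the private key from the two public signatures. Here is a minimal pure-Python sketch over secp256k1, with toy key and nonce values chosen for illustration; this is not production cryptography.

```python
# Minimal ECDSA over secp256k1, showing how nonce reuse leaks the key.
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    # Point addition; None is the point at infinity.
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None
    if a == b:
        lam = (3 * a[0] * a[0]) * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, point):
    # Double-and-add scalar multiplication.
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def sign(z, d, k):
    # Textbook ECDSA: r = (k*G).x mod n, s = k^-1 (z + r*d) mod n.
    r = ec_mul(k, G)[0] % N
    s = pow(k, -1, N) * (z + r * d) % N
    return r, s

d = 0xC0FFEE        # toy private key
z1, z2 = 111, 222   # two (toy) message hashes
k = 0xBADBAD        # the SAME nonce, used twice -- the fatal mistake

r1, s1 = sign(z1, d, k)
r2, s2 = sign(z2, d, k)
assert r1 == r2     # same nonce -> same r, visible to everyone

# From the two signatures alone, recover k, then the private key d:
k_rec = (z1 - z2) * pow(s1 - s2, -1, N) % N
d_rec = (s1 * k_rec - z1) * pow(r1, -1, N) % N
assert d_rec == d
```

This is exactly the failure mode that single-use addresses defend against: by the time two signatures from the same key could ever exist, a well-behaved wallet has already emptied the address.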
And I prefer hardware wallets that have open-source firmware, in which case the update can be audited and reviewed in source-code form, and people can anticipate what effects an upgrade will have. But in general, you've got to consider that hardware wallets have layers of security and protection in them. Yes, malicious firmware could compromise your private keys, but one of the things you can do, in addition to your seed, is use a passphrase. The firmware could compromise that too, and it could also compromise the entropy of the keys being generated. Still, for most users, the level of trust placed in the manufacturer of a hardware wallet is less than the trust they'd have to put into the hardware that comes in a laptop, or the operating system installed on it, or a software wallet they've installed, or a paper wallet generator downloaded from somewhere. In general, there is always a level of trust you have to place in both the hardware and software devices, and in the supply chain that gets you those devices, in order to operate in this space. Of all the possible combinations of hardware and software you have to trust in order to handle cryptographic keys, hardware wallets are probably one of the best mechanisms for storing private keys.

The average user can't do a complete DIY solution. And when I say a complete DIY solution, I do not mean taking an old laptop, wiping the hard drive, installing Tails or some other Linux operating system that you downloaded and put on a USB stick, and then generating a paper wallet.
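The "passphrase in addition to your seed" layer can be illustrated with the BIP-39 key-stretching step: the wallet seed is derived by running PBKDF2-HMAC-SHA512 over the mnemonic, salted with the string "mnemonic" plus the optional passphrase. A sketch (the mnemonic words below are placeholders, and Unicode NFKD normalization from the spec is omitted for brevity):

```python
import hashlib

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # BIP-39: seed = PBKDF2-HMAC-SHA512(mnemonic, "mnemonic" + passphrase),
    # with an iteration count of 2048 fixed by the specification.
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,
    )

mnemonic = "abandon abandon ability"  # placeholder words, for illustration

# Same mnemonic, different passphrases -> completely unrelated 64-byte seeds.
seed_plain = bip39_seed(mnemonic)
seed_extra = bip39_seed(mnemonic, "correct horse")
assert len(seed_plain) == 64
assert seed_plain != seed_extra
```

Because the passphrase never has to be stored on the device, a leaked seed backup, or firmware that exfiltrates the seed alone, still does not expose the passphrase-protected wallet, which is the kind of layered defense described above.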
That is basically you trying to build your own hardware wallet, with hardware that is much more suspect, software that is much more suspect, and a thousand times more attack surface, from the entropy generators, to man-in-the-middle attacks while you're downloading the operating system, to all of the hidden risks that you don't appreciate. That's not what I mean when I say DIY.

Honestly, the real DIY approach to private keys involves using an offline entropy source, like flipping a coin or throwing well-balanced, casino-quality dice, and constructing your own private key. That's for the ultra-paranoid. Yes, you could construct a private key that way. You'd have to do all of the ECDSA math on paper; you'd have to produce your public key on paper. It's possible. You could even double-hash that public key to produce an address, and receive and send money to that address, without it ever touching a computer at all, other than in the form of an address. You could do all of that manually. And if you can, bless you: you have fantastic knowledge of computers and mathematics, and the confidence to execute on it. 99.999% of all cryptography and cryptocurrency users will never have that level of skill or confidence.

And so, I think it's important to recognize that, while hardware wallets are not perfect, they are far better than all of the other solutions in terms of delivering usable, practical security to non-expert users, which makes cryptocurrencies easier to use for everyone. I think we're going to see better and better security developed in hardware wallets. Keep in mind, we're still in maybe the second generation of hardware wallets. It's a rapidly evolving field, with a lot of research and work going on, and a lot of great security researchers doing great work finding vulnerabilities and getting them fixed.
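The dice-rolling part of that ultra-paranoid approach is at least easy to sketch: treat each roll as a base-6 digit and reject out-of-range results. Ninety-nine rolls of a fair die give about 256 bits of entropy, enough for a secp256k1 private key. Here `random.randint` stands in for real, hand-rolled dice, which defeats the purpose in practice; it is only so the sketch runs.

```python
import random

# secp256k1 group order: valid private keys are integers in [1, N-1].
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def dice_to_key(rolls):
    # 99 rolls of a fair die: log2(6**99) is roughly 256 bits of entropy.
    if len(rolls) < 99 or any(r not in range(1, 7) for r in rolls):
        raise ValueError("need at least 99 rolls, each in 1-6")
    value = 0
    for r in rolls:
        value = value * 6 + (r - 1)  # interpret rolls as base-6 digits
    if not 1 <= value < N:
        # Rejection sampling: an out-of-range result means roll again.
        raise ValueError("out of range, roll again")
    return value

# Stand-in for physically rolling dice 99 times.
rolls = [random.randint(1, 6) for _ in range(99)]
key = dice_to_key(rolls)
assert 1 <= key < N
```

Of course, this only produces the private key; as the text says, deriving the public key and address from it without a computer means doing the elliptic-curve math and hashing by hand, which is where almost everyone should reach for a hardware wallet instead.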
So, just because there is a level of trust in the hardware, and just because there is the possibility of vulnerabilities in hardware wallets, doesn't mean you should try some kind of elaborate scheme where you build your own hardware wallet out of an old laptop, or download an operating system and a paper wallet generator, because you would be inadvertently introducing far, far more risk, and you wouldn't even know it.