Briz asks: Bitcoin network uptime is quoted as 99.984%. Please tell us about the other 0.016%. I've read about the technical details of the two downtime events in 2010 and 2013. I'm curious what you know about the fundamental aspects of these events and how the community reacted or overreacted. Also, I've read that the alert system was removed in 2016. How would an emergency situation be handled? Now, that's the question from Briz.

Let's talk about exactly what happened in those two events. The first downtime event was in 2010, when a vulnerability was discovered in Bitcoin. It's actually CVE-2010-5139; as you can see, someone helpfully provided that information in the question. Basically, what happened there was that someone exploited a value overflow vulnerability, which allowed them to produce about 184 billion bitcoin in a single transaction in one of the blocks. Of course, the amount of new bitcoin that was supposed to be created per block at the time was just the 50 bitcoin reward. Instead, 184 billion appeared, so just slightly over. That event was a real classic software vulnerability within Bitcoin, and in fact that's fairly rare; we haven't really seen that kind of systemic vulnerability since.

It caused an emergency response. The consensus of the network was that this was an invalid block, because the consensus rules that everyone thought they had agreed to did not allow for such an event to happen, even if the software implementation misapplied those rules and did allow it. As a result, a patch was issued and people upgraded their software. That block was reversed because the patched software recognized it as invalid, and a chain reorganization occurred. It took about six and a half hours for the new software release to happen, for people to upgrade their nodes and mining nodes, for that block to be reorganized out, and for all of the transactions, except of course the 184 billion bitcoin transaction, to be replayed.

I did not witness that first incident, because it happened before I got involved in Bitcoin; I didn't know anything about it and only read about it afterwards. The second incident, however, I did witness, and it was really quite fascinating. The second incident happened in 2013, and it was more specifically a problem in the implementation of the underlying database during a software upgrade. At the time, the network was running version 0.7 of the Bitcoin software, and the underlying database of blocks and transactions was stored in Berkeley DB, which is a type of database used for that storage. Berkeley DB had an unfortunate limitation: as configured, it could not take out more than a fixed number of database locks while processing a single block. Then version 0.8 of the Bitcoin client was released; I believe that was in early 2013. Version 0.8 changed the underlying database to LevelDB. Until that moment in time, the database infrastructure of Bitcoin Core was considered non-consensus, meaning that changes to the database shouldn't affect consensus, because the consensus code does not consider the database relevant; it is simply something that is invisible, or should be invisible, to the consensus rules. In this particular case, what happened was effectively an abstraction layer violation, where the abstraction of the database wasn't effective in shielding the consensus rules from that unfortunate limitation in the underlying database.
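To make that abstraction layer violation concrete, here is a minimal sketch in Python. It is not Bitcoin Core's actual code; the backends OldBackend (standing in for Berkeley DB with a lock budget) and NewBackend (standing in for LevelDB), and the helper connect_block, are hypothetical names for illustration only. The point is that a resource limit buried in the storage layer can make one node reject a block that another node, applying identical consensus rules, accepts.

```python
# Hypothetical sketch of the 2013 abstraction layer violation (not real Bitcoin Core code):
# two nodes apply the same consensus rules, but the storage backend decides the outcome.

class OldBackend:
    """Stand-in for Berkeley DB with a fixed lock budget per block."""
    MAX_LOCKS = 5000  # illustrative limit, not the real configured value

    def apply_block(self, block):
        locks_needed = sum(len(tx["inputs"]) for tx in block["txs"])
        if locks_needed > self.MAX_LOCKS:
            raise RuntimeError("out of database locks")  # storage-layer failure

class NewBackend:
    """Stand-in for LevelDB: no comparable lock limit."""
    def apply_block(self, block):
        pass  # stores any consensus-valid block without complaint

def consensus_valid(block):
    return True  # assume the block passes every actual consensus check

def connect_block(block, backend):
    # The consensus rules themselves say nothing about database locks...
    assert consensus_valid(block)
    try:
        backend.apply_block(block)   # ...but the storage layer can still fail
        return True
    except RuntimeError:
        return False                 # node effectively rejects a valid block

big_block = {"txs": [{"inputs": list(range(10))} for _ in range(600)]}  # 6000 inputs
print(connect_block(big_block, NewBackend()))  # True  -> 0.8-style nodes extend the chain
print(connect_block(big_block, OldBackend()))  # False -> 0.7-style nodes fall behind: a fork
```

In this sketch nothing in the rules mentions locks, yet the two backends diverge on the same block, which is exactly why the database was assumed to be outside consensus until it suddenly wasn't.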
During the upgrade to 0.8, one of the miners that had upgraded to 0.8 created a block that was large enough, in terms of the transactions and inputs it touched, to exceed the old database's lock limit. That block was handled perfectly well by the LevelDB database and the 0.8 version of Bitcoin running on top of it. But unfortunately, any 0.7 client that encountered this block when it was broadcast on the network choked. What I mean by choked is that in the process of trying to verify the block, the old client needed more database locks than Berkeley DB was configured to provide, the operation failed, and Bitcoin Core 0.7 clients started crashing. What would happen is they would crash, reboot, and upon starting up they would contact the network and ask, got any new blocks? They'd receive the same block, which at this point had already become part of the blockchain, try to verify it, and crash again, then reboot and crash again, and reboot and crash again. Every time they were presented with this block, they choked.

It was decided at that point to roll back 0.8. Who makes that decision? That's a really critical question. Alerts went out in a variety of ways. Some of them used the built-in Bitcoin alert system that existed then, where a message signed by one of a handful of digital keys, held by some participants in the development process, could be issued to notify people of an emergency situation. This was introduced by Satoshi in the original client, or in its early stages, if I remember correctly. The purpose, at a time when there were very few communication channels for Bitcoin, was to have a way to reach all node operators with an emergency alert by sending it over the very same peer-to-peer network as Bitcoin itself. However, there are some significant challenges and risks with such an approach. It was deprecated in 2016, because by that point there were many communication channels by which node operators as well as miners can communicate, and do communicate, in emergencies.

In this case, I actually did not receive the alert, but I noticed that my node was misbehaving. I was doing some work at the time, and I started reading online that blocks were being delayed. Block times went up to almost twenty minutes, because half the hash power of the network, which was still running 0.7, kept crashing. Effectively the hash rate dropped dramatically, because only 0.8 clients were able to continue, and as a result blocks were coming in more slowly. I immediately went onto the Bitcoin developers' IRC channel. I got contacted by a number of exchanges and others who were trying to figure out what was going on, and I put them in touch with some of the core developers they didn't know at the time. I made some introductions, but I wasn't in a position to help beyond that, so I just raised the alarm with a few people I knew and then watched.

What happened next was really interesting, because very quickly consensus was arrived at. Again, about six hours later the fork was resolved by rolling the chain back, reorganizing it around the 0.7 clients. No transactions were lost and no double spends occurred, but effectively everybody who was mining with a 0.8 client turned it off, the 0.7 clients continued the chain, and the upgrade was postponed. It led to a 26-block divergence at the time and a 26-block reorganization. You can actually see that on a few websites that have snapshots of what was happening at the time. Anyway, those are the two events. That's the 0.016% downtime of the Bitcoin network.
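As a side note on the alert mechanism described above, the sketch below shows only the basic idea, not the real alert message format or the real keys. It uses the third-party Python ecdsa package, and the names accept_alert and TRUSTED_ALERT_KEYS are purely illustrative: a node accepts and relays an alert only if it carries a valid signature from one of a small set of trusted keys.

```python
# Rough sketch of the old alert system's core idea (hypothetical, not the real wire format):
# only messages signed by a trusted "alert key" are accepted and relayed.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# In the real client the public keys were hard-coded; here we generate a pair for demonstration.
alert_signing_key = SigningKey.generate(curve=SECP256k1)
TRUSTED_ALERT_KEYS = [alert_signing_key.get_verifying_key()]

def accept_alert(message: bytes, signature: bytes) -> bool:
    """Accept (and relay) the alert only if a trusted key signed it."""
    for key in TRUSTED_ALERT_KEYS:
        try:
            if key.verify(signature, message):
                return True
        except BadSignatureError:
            continue
    return False

alert = b"URGENT: chain fork in progress, action required"
sig = alert_signing_key.sign(alert)
print(accept_alert(alert, sig))          # True: signed by a trusted key
print(accept_alert(b"fake alert", sig))  # False: signature does not match the message
```

The obvious risk with any scheme like this is the keys themselves: whoever holds them can broadcast to every node, which is part of why the mechanism was eventually retired once better communication channels existed.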
In both cases no money was lost, except for the 184 billion bitcoin that were created in 2010, which of course were not real, because the understanding that everyone had of the consensus rules was violated. It's an interesting concept: what happens when people think they know what the consensus rules are, or how they will be applied, and then they're applied differently? It's not that different from what happened with the DAO hack in Ethereum much later. The big difference, of course, is that in 2010 this was in the core software of Bitcoin, and in fact in one of its most critical components, the issuance of new currency and the check that currency is issued on the correct schedule, whereas the DAO hack was a separate smart contract that only affected a small part of the network; but it produced a very similar kind of response by the community. And then in 2013 it was the realization that unanticipated bugs, even in the database layer or other layers of the system that are not part of the consensus-critical code, can have an effect on the operation of consensus, even though the rules are followed accurately by all nodes. A block that exceeded the old database's lock limit was perfectly valid by the rules of consensus, but none of the old clients running Berkeley DB could actually handle it, because of that limitation in Berkeley DB, and as a result it caused this chain fork.
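To tie the 2010 event back to the code level, here is a minimal sketch, in Python rather than the original C++, of how summing output values as 64-bit integers without range checks lets an absurdly large amount of bitcoin slip past a naive validity check, and how a range-checked version catches it. The helper names and the exact output values are illustrative, not the incident's literal data.

```python
# Illustrative sketch (not the actual 2010 Bitcoin code) of the value overflow:
# summing outputs in 64-bit arithmetic wraps around to a negative total,
# so a naive "did this block create too much money?" check passes.

COIN = 100_000_000            # satoshis per bitcoin
MAX_MONEY = 21_000_000 * COIN

def wrap_int64(x: int) -> int:
    """Simulate C-style 64-bit signed integer wraparound."""
    return (x + 2**63) % 2**64 - 2**63

def naive_block_check(output_values, allowed):
    """Vulnerable check: sums outputs without range-checking each value or the sum."""
    total = 0
    for v in output_values:
        total = wrap_int64(total + v)  # overflow wraps the running total negative
    return total <= allowed           # passes even though the outputs are enormous

def fixed_block_check(output_values, allowed):
    """Post-patch style check: every value and every partial sum must be in range."""
    total = 0
    for v in output_values:
        if not (0 <= v <= MAX_MONEY):
            return False
        total += v
        if not (0 <= total <= MAX_MONEY):
            return False
    return total <= allowed

# Two outputs of roughly 92 billion BTC each, in the spirit of the incident transaction
outputs = [92_233_720_368 * COIN, 92_233_720_368 * COIN]
print(naive_block_check(outputs, allowed=50 * COIN))  # True  -> accepted in 2010
print(fixed_block_check(outputs, allowed=50 * COIN))  # False -> rejected after the patch
```

The 2010 patch took essentially this second approach, rejecting any output value or running sum outside the valid money range, which is why the offending block became invalid as soon as nodes upgraded.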