Even with the Lightning Network, Bitcoin will need a hard fork to increase the block size, yes? Is that conversation happening? It's likely, and I've said before that I think we're going to see all of the different approaches used for scalability. That means second-layer technologies. It means optimization of how transactions are produced and stored. It means various forms of compression, if you like to call it that, such as signature aggregation and various other things like that. All of these things are needed, and among them is an increase in the block size limit. At the moment, the Bitcoin Core roadmap prioritizes optimizations such as MAST, Schnorr signatures, and signature aggregation over a block size hard fork increase. But that doesn't mean there's no research going on there. In fact, there's quite a bit of research. There are a number of proposals, and you can find them on the bitcoincore.org website, in a section on hard fork research that has quite a few things going on.

There are a couple of proposals by Johnson Lau, if I remember correctly, called SpoonNet, which have now evolved into SpoonNet 2 and SpoonNet 3. It's a funny name, meant as a playful alternative to "fork." The idea behind SpoonNet is to do a hard fork in such a way as to change the structure of the block header, in order to implement a couple of very, very important features, including the ability to add more commitments into the block header, in addition to the Merkle root of the transaction tree. One example would be to incorporate the Merkle root of the witness tree into the block header explicitly, rather than sticking it in the coinbase transaction. Other recommendations are to increase the space available for the nonce, which would make the manufacture of mining equipment easier, allow for better scalability, and avoid some of the kludges that are currently used to expand the nonce space. Part of the idea here is that a hard fork is a fairly big upgrade.
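To make the commitment idea concrete, here is a small sketch, not real consensus code, of what "more commitments in the header" means: today the 80-byte header carries only the transaction Merkle root, and the witness commitment is tucked into the coinbase transaction, whereas a restructured header could carry the witness Merkle root explicitly. The field names here are illustrative assumptions, not an actual proposal's encoding.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256 hash."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    """Bitcoin-style Merkle root: pair up hashes level by level,
    duplicating the last element when a level has an odd count."""
    if not hashes:
        return b"\x00" * 32
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Toy transaction identifiers (placeholders, not real serializations):
txids = [dsha256(b"tx1"), dsha256(b"tx2"), dsha256(b"tx3")]
wtxids = [dsha256(b"tx1+witness"), dsha256(b"tx2+witness"), dsha256(b"tx3+witness")]

# A restructured header could commit to both trees directly,
# instead of hiding the witness commitment in the coinbase:
header_commitments = {
    "tx_merkle_root": merkle_root(txids).hex(),
    "witness_merkle_root": merkle_root(wtxids).hex(),  # explicit, not in coinbase
}
```

The point is only the shape of the change: any number of additional commitments (witness tree, future proofs) could hang off the header once its structure is allowed to change, which is exactly the kind of thing you only get via a hard fork.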
It requires a lot of preparation across all of the companies that participate in the space and run software that expects a specific format of block. You need to do a lot of preparation to change things like that; otherwise it can lead to various problems, bugs, and unanticipated consequences. The research into hard forks is more about: if we're going to do all that, we should do a few more things than simply increasing the base capacity. The reasoning is that it will take a lot of time to prepare anyway, and it will require software changes anyway, so why not also implement some changes that are absolutely important and necessary and that also require a hard fork? You have to strike a balance between doing just the minimal, one-off hard fork, which is unlikely to give you much of a boost in capacity and will have to be repeated, versus thinking a bit more long-term and doing a few more things, versus throwing in everything including the kitchen sink, which increases the risk tremendously. That balance is subtle, and the conversations and research about it are ongoing. I don't expect we will see a hard fork in 2018, at least not on the Core roadmap, because I think we will see sufficient advancement using MAST, Schnorr signatures, signature aggregation, and the Lightning Network to not need one in order to increase the capacity of the network. We will see what happens after that.

Wouldn't it be a better solution to raise the block size limit, like Bitcoin Cash did, rather than use Segregated Witness, which will add between 38 and 47 bytes of data to the coinbase transaction? Ricardo, the solution created by Segregated Witness isn't just about scaling. Raising the block size limit, as Bitcoin Cash did, creates more capacity in each block and therefore attempts to solve the scaling issue in a different way.
It doesn't fix transaction malleability, which was really the primary reason for Segregated Witness. Segregated Witness is first and foremost a transaction malleability fix, and secondarily a scaling fix. Bitcoin Cash's increase of the block size limit hasn't solved transaction malleability, which means you can't build complicated and sophisticated smart contracts on top of Bitcoin Cash, because those are not secure in the presence of transaction malleability. You can't do things like payment channels that are open-ended and persist indefinitely. The current implementation of the Lightning Network, for example, can't run there. Bitcoin Cash has a plan to fix transaction malleability as well; on the Bitcoin Cash side, they decided to fix scalability first and transaction malleability second. The Segregated Witness approach fixed transaction malleability first, did a bit of scaling, and anticipated a future block size increase and change in the block header format, as part of a hard fork, for more scalability increases down the road. It was simply a difference in the order in which things were done.

Whether it's a better solution or not is really up to you to decide. You get to choose. If you think that what Bitcoin Cash did was a better solution, you can use Bitcoin Cash. If you think that what Bitcoin did with Segregated Witness is a better solution, you can use Bitcoin. In fact, you can use both. You can use Bitcoin Cash and Bitcoin, Ethereum and Litecoin, Monero and Zcash, and all of the other systems. Even better, because of the Lightning Network, we will soon see the possibility of routing payments across all of these blockchains, so that you can send a payment in Bitcoin and it arrives at the other end in Litecoin, or Ethereum, or something like that. This is about choice. You have the choice to use whatever you think is the best approach to scaling.
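The malleability problem discussed above can be shown with a toy model. This is a deliberately simplified sketch, not real Bitcoin serialization: the point is that a legacy txid hashes the signature together with the transaction body, so a third party who re-encodes a signature (without invalidating it) changes the txid and breaks anything that referenced it, while SegWit moves signatures out of the data that the txid covers.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256, used for transaction identifiers."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Hypothetical serialized transaction pieces (placeholders, not real encoding):
body = b"version|inputs|outputs|locktime"
sig_a = b"sig:3045...original"    # the signature as the sender broadcast it
sig_b = b"sig:3046...re-encoded"  # same key, alternate valid encoding of the signature

# Legacy txid covers the signature, so re-encoding it changes the txid:
legacy_txid_a = dsha256(body + sig_a)
legacy_txid_b = dsha256(body + sig_b)
assert legacy_txid_a != legacy_txid_b  # malleated: same payment, different txid

# SegWit-style txid covers only the body; the witness (signature) is
# committed separately, so re-encoding it cannot change the txid:
segwit_txid = dsha256(body)
assert dsha256(body) == segwit_txid  # stable regardless of witness encoding
```

That stability is exactly what open-ended payment channels depend on: a channel-funding transaction's txid must be knowable and fixed before confirmation, so that refund and commitment transactions spending it remain valid.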