Becker asks a beginner's question about downloading the blockchain. Very good. Thank you for asking beginner's questions; that's really, really useful, because a lot of people need exactly this basic information, and beginner's questions are perfect for this Q&A.

Becker asks: why does it take so long to download the blockchain? I do have a fast internet connection, and I could download 200 gigabytes in less than an hour.

What Becker is talking about is what's called the initial blockchain download, or IBD, which is the first synchronization of a Bitcoin node, or any kind of blockchain node, to its blockchain. The answer is that while the full blockchain is about 200 gigabytes of data, you're not simply downloading that and storing it on disk. One of the fundamental functions of a Bitcoin node is to validate all of the rules of consensus, and your node does that even when it isn't doing a full sync of the blockchain. Every node validates every rule.

When you start from the genesis block, you download block zero, block one, block two, and so on, building the blockchain until you reach today's chain tip and sync fully with the rest of the network. For every block you download, you download all of the transactions in that block, and then your node goes through and validates everything: all of the signatures, all of the spends, all of the amounts, all of the coinbase rewards, all of the fees. It recreates and reconstructs every soft fork, upgrade, and change in the rules, replicating the entire history from January 3rd, 2009. It behaves like a 2009 node for the first part of the download, and then, as the rules change, it counts the votes in a soft fork, changes the rules in real time, and evaluates the next block based on the new rules. It recalculates the difficulty and checks whether the miners were hitting the target for blocks that were mined in 2010.
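To make that replay idea concrete, here's a toy sketch in Python, not real Bitcoin Core code. The rule names and activation heights are made up for illustration; the point is only the shape of the process: blocks are validated strictly in order, and the set of consensus rules in force changes as the replay crosses each fork's activation height.

```python
# Toy model of IBD rule replay. Rule names and heights are
# hypothetical, purely to illustrate the mechanism.
ACTIVATIONS = {
    0:    {"base_rules"},
    1000: {"base_rules", "softfork_a"},
    2000: {"base_rules", "softfork_a", "softfork_b"},
}

def rules_at(height):
    """Return the consensus rule set active at a given block height."""
    active = set()
    for h, rules in sorted(ACTIVATIONS.items()):
        if height >= h:
            active = rules
    return active

def validate_chain(blocks):
    """Validate blocks in order, switching rules as forks activate.

    Each toy transaction carries a 'satisfies' set; a real node would
    instead check signatures, amounts, coinbase rewards, and fees
    under the rules active at this height.
    """
    for height, block in enumerate(blocks):
        rules = rules_at(height)
        for tx in block["txs"]:
            if not tx.get("satisfies", set()) >= rules:
                raise ValueError(f"tx invalid at height {height}")
    return True
```

The per-block, per-transaction work in that inner loop, multiplied by hundreds of thousands of blocks, is where the CPU time goes; the download itself is the cheap part.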
It evaluates every rule as if it were downloading that block for the first time, at that time. So it simulates living in 2009, then in 2010, and so on, all the way up to today: every bug, every fork, every change. That takes more than just bandwidth. It takes CPU, and it also takes a big amount of disk indexing, because if you think about it, in order to validate whether a transaction is double-spending or properly spending, the node has to keep a UTXO set. It uses this UTXO set to check whether each amount was actually available for spending, which means indexing all of the unspent outputs by transaction ID: when your transaction refers to a previous transaction, the node has to look it up by hash. It also has to reconstruct the merkle roots of all of the blocks and keep the whole chain of previous-block hashes linked together. That's a lot of database indexing, and that's what's happening with your node.

I would guess that your real problem here is not bandwidth on the network, but bandwidth to the hard drive: throughput and performance of the disk, as well as available memory. A recommended minimum configuration is 4GB of RAM, and that's only if you have a relatively fast solid-state disk (SSD), because of all of the indexing, reading, and writing to the database on disk that will be happening. If you don't have a solid-state disk, then you need to do a lot more caching in RAM to compensate for the performance of an old mechanical hard drive. In that case, you might need 8GB or 16GB of RAM. I would guess that your bottleneck is disk I/O, perhaps CPU, although that's less likely; if you're running on a modern four-core processor, CPU shouldn't be a problem. If you're doing all of this on a Raspberry Pi with only 2GB of RAM, then I can see what your problem is. The bottlenecks are within your system, rather than in your bandwidth.
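The UTXO bookkeeping described above can be sketched as a simple lookup table keyed by (transaction ID, output index). This is a toy model, not Bitcoin Core's actual database: spending an input means looking the outpoint up by hash, checking the amount, deleting it, and inserting the new outputs. Doing that for every input of every transaction in history is why IBD hammers the disk and benefits from a large in-memory cache.

```python
# Toy UTXO set: maps (txid, output_index) -> amount.
# Illustrative only; real nodes store this in an on-disk database.
def apply_tx(utxos, txid, inputs, outputs):
    """Spend `inputs` (prior outpoints) and create new `outputs`."""
    spent = 0
    for prev_txid, idx in inputs:
        key = (prev_txid, idx)
        if key not in utxos:          # unknown or already spent
            raise ValueError("double spend or unknown output")
        spent += utxos.pop(key)       # consume the coin
    if sum(outputs) > spent:
        raise ValueError("outputs exceed inputs")
    for i, amount in enumerate(outputs):
        utxos[(txid, i)] = amount     # new spendable outputs
    return utxos
```

Every one of those lookups, deletes, and inserts translates into database reads and writes during sync, which is why a fast SSD, or a lot of RAM to cache the set, matters far more than a fast network connection.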