Roberto asks: reviewing the material about mining on slides 20 and 21, the last step refers to the hash being checked against the desired pattern in order to arrive at the prize, the reward. How is this desired pattern defined and generated in the decentralized platform? Is the desired pattern somehow centralized and broadcast to each node? What are the inputs to build this desired pattern? Is there a new pattern created whenever a new block has been created, and does that change per block?

Great questions. This is an area that many of our students find confusing, to say the least. It is the topic of mining, and it's not that easy to understand. Let's get our terms correct, first of all. The desired pattern is called the target, and the target defines the difficulty. If a miner achieves, through the proof-of-work algorithm, a result that is less than the target, they are eligible to receive the reward, provided the rest of the information in the block and its transactions is valid and can be validated through the consensus rules by everybody else.

How is the target defined? How does it change? This is a great question, and it's an area of great confusion. The target is basically a number, and that number has to be greater than the hash of the block. It is simply a greater-than/less-than comparison against this desired pattern. Miners mine by hashing the header of each block, and the hash they produce looks like a long string of hexadecimal digits; that is usually how it is presented. But it is essentially just a number. If you think of the hash as a number, then the target is another number, and the hash of the block has to be less than the target.

One way I like to think of this is that the target is like a limbo bar. If you have ever seen limbo, it is the dance where you have to pass underneath a bar. The lower the bar gets, the harder it is for every limbo dancer to pass underneath it. In the same way, if the target is lowered, it is actually harder to find a number that is smaller than the target. Every time the target gets lower, the difficulty becomes greater, because it is harder to find a number that fits underneath the target. That is the process by which the target is compared to the block hash: the target is a number that defines the difficulty of the proof-of-work mining algorithm.

If you look at the target number, what you notice immediately is that the first few digits are zeros. That is because, while the number started as a very high number back in 2009, when Satoshi Nakamoto mined the very first block, it has since become billions of times smaller, and that makes the calculation billions of times more difficult. As the number becomes smaller, its leading digits become zeros. Think of a big number, say a million. What is smaller than that? 999,999 is smaller than a million, and it can be written as 0,999,999, with a zero in front. What is smaller than that? 1,000 is smaller than that, and it can be written as 0,001,000. As you can see, as we go down, more and more zeros appear at the beginning of the number. Finding a number that is even smaller than the target gets more difficult the smaller the target becomes.

The hashing process that miners conduct is random. It is a process by which they use a changing random number (a nonce), produce a hash, and what comes out of the cryptographic hash function is a number that appears to be random. You cannot predict what the number will be, so you cannot predict whether it will be smaller than the target. In order to find a number that is smaller than the target, you just have to keep trying, again and again, pulling these seemingly random numbers out of the cryptographic hash function until one of them, by sheer chance, is smaller than the target. The lower the target, the more hashing you have to do before you find one that qualifies. That is the process by which the mining reward is allocated through the proof-of-work algorithm.
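To make this concrete, here is a minimal Python sketch of that trial-and-error process. Everything in it is invented for illustration: the header bytes, the 8-byte nonce, and the target value are not Bitcoin's real formats (a real header is a specific 80-byte structure, and the digest is compared as a little-endian number). But the comparison is the same idea: the hash, read as an integer, has to come in under the bar.

```python
import hashlib

def block_hash(header: bytes) -> int:
    """Double SHA-256 of the serialized header, read as one big integer.
    (Real Bitcoin compares the digest as a little-endian number;
    big-endian is fine for this illustration.)"""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big")

# A hypothetical target. The hash space is 256 bits wide, so a target of
# 2**240 demands roughly 16 leading zero bits -- a fairly high "limbo bar".
TARGET = 1 << 240

def mine(header_prefix: bytes, target: int) -> int:
    """Grind through nonces until the header hash falls below the target."""
    nonce = 0
    while True:
        candidate = header_prefix + nonce.to_bytes(8, "big")
        if block_hash(candidate) < target:
            return nonce  # this hash "passed under the bar"
        nonce += 1

nonce = mine(b"example-header|", TARGET)
winning = b"example-header|" + nonce.to_bytes(8, "big")
# The printed hash starts with at least four hex zeros, exactly the
# "leading zeros" pattern described above.
print(f"nonce {nonce} -> hash {block_hash(winning):064x}")
```

With this toy target, each attempt succeeds with probability about one in 65,000, so the loop finds a winner in a fraction of a second. Bitcoin's real target is vastly lower, which is why finding a valid block takes specialized hardware and enormous numbers of attempts.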
Going back to Roberto's question: is the desired pattern somehow centralized and broadcast to each node? No. Each node independently calculates what the target should be and adjusts it. It started with a specific number that was hard-coded in the software in January of 2009, with the genesis block. Since then, every two weeks we have a retargeting, as it is called. Every 2,016 blocks, which is approximately two weeks, every node in the network calculates a new target for the next block. The nodes go through blocks 0 through 2,015 and then say: okay, the next block is going to be block 2,016, so we have completed one retargeting period. We need to recalculate, independently, what the target should be for block 2,016 before it is mined.

What should that be? Let's look at the previous blocks and ask how long they took. We look at the previous 2,016 blocks and say: those should have taken 20,160 minutes, because it is 10 minutes per block. If we count how long it took to mine the previous 2,016 blocks and find that it was less than 20,160 minutes, that means we were finding blocks faster than we should have been, which means the difficulty was not as high as it needed to be. It was too easy. As a result, the target needs to be lowered proportionally in order to make it more difficult. If, on the other hand, it is taking longer than 20,160 minutes to find 2,016 blocks, that means it is too difficult and we are finding blocks too slowly. The target is adjusted up to make it easier to find something that is less than the target, again proportionally.

The formula is to proportionally adjust the target up or down by the ratio of how long it took to find the blocks over how long it should have taken, which is 20,160 minutes. That proportionate adjustment is the same for every node. Even though the nodes are not communicating or coordinating in any way, they can all count how long it took to find the previous 2,016 blocks, and that is the same number across all of the nodes, because they count it by looking at the timestamps in the headers of the blocks. They can all divide that number by 20,160 minutes, which is the same for everyone. They are going to arrive at exactly the same result, and if they multiply the target proportionally by that ratio, they are going to get the same new target. All of the nodes in the network, having calculated the same equation with the same inputs, arrive at the same conclusion. They independently figure out what the target should be for the next block, the 2,017th block in the series. Then, 2,016 blocks later, they do it again: they put the same inputs into the equation and get the same retargeting value across the entire network. Even though there is no synchronization, because they are using the same inputs, they all arrive at exactly the same conclusion. That becomes the consensus target. Even if a node lies and says it found a valid block, since all of the nodes know what the target should be for this retargeting period, they will all check against that target, and they will only accept a block that has actually been mined to that specification, where the block hash is less than the target.
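Here is a sketch of that retargeting rule and of the independent check, again in Python with hypothetical function names and numbers. One real-world detail the sketch adds that the discussion above skips: Bitcoin's consensus rule also clamps the adjustment ratio to a factor of four in either direction per period.

```python
import hashlib

EXPECTED_MINUTES = 2016 * 10  # 2,016 blocks at 10 minutes each = 20,160

def retarget(old_target: int, actual_minutes: int) -> int:
    """Proportionally adjust the target by (actual time / expected time).
    Blocks found too fast -> ratio < 1 -> target shrinks -> harder.
    Blocks found too slow -> ratio > 1 -> target grows  -> easier."""
    ratio = actual_minutes / EXPECTED_MINUTES
    ratio = max(0.25, min(ratio, 4.0))  # Bitcoin clamps to [1/4, 4]
    return int(old_target * ratio)

def accept_block(header: bytes, consensus_target: int) -> bool:
    """A node's independent check: recompute the block hash and compare it
    to the target the node derived on its own from the previous headers.
    A miner lying about having found a valid block fails this comparison."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big") < consensus_target

# Worked example: the last 2,016 blocks took 18,144 minutes (10% fast),
# so every node independently multiplies the target by 18144/20160 = 0.9.
new_target = retarget(1 << 240, 18_144)
print(new_target < 1 << 240)  # True: lower target, higher difficulty
```

Because every node feeds the same header timestamps into the same formula, every node computes the same `new_target` without any coordination, which is the whole point of the passage above.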
To answer the second question, what are the inputs to build this desired pattern? They are the number of minutes it took to mine the previous 2,016 blocks, divided by the expected number of minutes, which is 20,160.

The next question from Roberto was: is there a new pattern created whenever a new block has been created? Does it change every block? No, it changes every retargeting period, which is every 2,016 blocks, or approximately two weeks.

Christoph also asks: is 2,016 blocks still an optimal adjustment frequency, considering the current volatility, or should it be more frequent? This is an ongoing debate, which a lot of developers in the Bitcoin community have from time to time. There have been many different suggestions for changing the difficulty retargeting algorithm and making it perhaps a bit more nimble. There are some disadvantages to making it more frequent: you can get a whiplash effect, where short-term fluctuations affect the difficulty, which causes more short-term fluctuations, which can actually increase volatility. Doing it every two weeks actually reduces volatility, because it acts as a damper. Some developers have suggested more sophisticated algorithms than a simple moving average; for example, using some kind of proportional-integral-derivative controller, a PID controller, which is a feedback mechanism a bit like the way cruise control works on your car, or using a different window for the moving average. There are advantages and disadvantages to every proposal, and none of them have progressed. Keep in mind that a change like this would be a hard fork: it would require changing all of the software in the ecosystem in order to remain in consensus, which would require massive coordination. It is the kind of thing that might be considered together with other changes, for example a change in the format of the block header, as part of some other big upgrade done as a hard fork. Some of the recommendations for hard fork planning do include a difficulty adjustment algorithm change.
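For readers unfamiliar with the PID idea mentioned above, here is a generic proportional-integral-derivative update applied to a target. To be clear, this is not any actual Bitcoin proposal; the gains and variable names are invented purely to show the shape of such a feedback mechanism.

```python
def pid_adjust(target: int, error_history: list[float],
               kp: float = 0.5, ki: float = 0.1, kd: float = 0.2) -> int:
    """One step of a generic PID controller applied to the target.
    error = (actual block time - desired block time) / desired block time,
    so a positive error (blocks too slow) nudges the target up (easier).
    The gains kp/ki/kd are arbitrary illustrative values."""
    p = error_history[-1]                      # react to the latest error
    i = sum(error_history)                     # accumulated past error
    d = error_history[-1] - error_history[-2]  # trend of the error
    correction = kp * p + ki * i + kd * d
    return int(target * (1 + correction))

# E.g. recent periods ran 5% slow, then 2% slow: the controller eases the
# target up, while the derivative term damps any over-correction.
print(pid_adjust(1 << 240, [0.05, 0.02]) > 1 << 240)  # True
```

The appeal of such a controller is exactly the cruise-control analogy: it reacts not just to the current error but to its history and its trend, which can reduce the whiplash effect of frequent retargeting.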
Afanasios asks: what happens when the mining reward drops even lower, and it makes no financial sense for miners, and eventually everyone drops out of the mining process? The trick here is that when difficulty changes, or when profitability changes, it doesn't affect all miners the same way. There are thousands and thousands of miners out there, operating with a fairly broad variety of hashing equipment, electricity prices, operating costs, labor costs, utility costs, real estate costs, and so on, all of which determine their profitability. Think of it as a range. Somewhere in there are miners operating the absolute latest, most efficient ASICs, installed in the most efficient way possible and managed in an environment where real estate is dirt cheap, electricity is almost free, and labor costs are at an absolute minimum. Those miners are going to be wildly profitable at the current difficulty, because they are not the average. Meanwhile, on the other end of the scale, there are people operating with previous-generation chips in places where electricity is more expensive, real estate costs more, labor costs more, and so on. They are not going to be profitable. Average profitability sits between those two extremes. If average profitability changes, that is like a moving bar, and more miners will fall below the threshold where mining is profitable. The least profitable miners will abandon the field, and they will be replaced by more efficient miners, with more efficient equipment in more efficient locations. "Eventually everyone drops out" is not something that happens: if more and more people drop out, the difficulty goes down, and when the difficulty goes down, mining becomes profitable again for the people who just dropped out, so they come back. It is a self-adjusting process. The fewer people doing it, the easier it gets; the more people doing it, the harder it gets. That means there is always someone making a profit in this environment, but not everyone is making a profit.
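The moving-bar effect can be shown with a toy model. All of the names and numbers below are made up for illustration; the point is only that a single profitability threshold slices differently through a range of cost structures.

```python
# Entirely hypothetical miners and costs, in dollars per unit of hashing work.
miners = [
    ("latest ASICs, near-free power", 0.02),
    ("average operation",             0.045),
    ("old chips, expensive power",    0.07),
]

def still_mining(revenue_per_unit: float) -> list[str]:
    """Whoever's cost is below the current revenue per unit of hashing
    stays in the game; everyone above the bar drops out."""
    return [name for name, cost in miners if cost < revenue_per_unit]

print(still_mining(0.05))  # the two cheaper operations remain
print(still_mining(0.03))  # only the most efficient miner survives
# Fewer miners -> blocks come slower -> difficulty falls at the next
# retarget -> each remaining hash earns more -> marginal miners return.
```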