So this is joint work with George Bissias, who's also at UMass Amherst, and this talk has really been motivated by a lot of the challenges that have been discussed in earlier talks at this conference. Just to take a subset of the long-term engineering challenges that have been brought up, the ones that motivate Bobtail include accurate difficulty adjustment, right, because if we can't adjust difficulty correctly all the time, there's gonna be this uneven spacing of blocks, and uneven spacing of blocks is gonna hinder adoption in the sense that it gives this sheen of unreliability to the Bitcoin Cash network. We've also heard talks on zero confirmation, and certainly related to that is low delay in confirmation. High delay happens because you're waiting too long for the next block to come, or maybe you're waiting for a lot of blocks to come, you're waiting for six blocks before you're really sure about the confirmation being valid, and that all relates to the double spend attack.
So in general, we also really just want reliable, smooth transaction throughput. A lot of the talks here have been about increasing the size of the blocks, right, but even if you got to a terabyte block, that wouldn't help you with the interblock time. If you suddenly have to wait 30 minutes for the next block to come, who cares that it's a terabyte, you're still waiting for that block to arrive. And although it hasn't been talked about much at the conference yet, this is also related, it turns out, to resilience against denial of service attacks, and by that I really mean selfish mining and eclipse attacks. So the thing is that even if conditions are perfect, perhaps even optimistically so, say the mining power isn't changing at all, it's still the case that about seven times a day there's gonna be 30 minutes or more between blocks. That's the ideal, where nothing's even under attack, and these six-block wait times are standard. So this is all related to what I'm gonna talk to you about today. The other motivation for talking about Bobtail, which I haven't even described yet, is some of the medium-term planning that's been going on in Bitcoin Unlimited, Bitcoin ABC, and others. For example, it's not the use of Bobtail, but rather just the evaluation of Bobtail, that is under consideration by Bitcoin Unlimited to reduce interblock time variance, increase double spend resistance, improve the DAA, and achieve better mining. Similarly, again not Bobtail itself, but Bitcoin ABC has talked about improving difficulty adjustment. Bobtail is not a difficulty adjustment algorithm, but it improves the performance of any difficulty adjustment algorithm, because if interblock time variance is low, if there's an even spacing between blocks, difficulty adjustment is easy, right?
Okay, so actually, I also just wanna go back and take a moment to say I'm also part of the set of co-authors involved in Graphene, and just to tell you, we have a pull request out for Graphene. I'm not gonna talk about it today, but it's a complete working version, and it has unit and end-to-end tests that pass Graphene blocks between locally running instances. What remains to be done in our pull request, and really it's George's pull request, is performance tuning and testnet deployment, but we're getting there, so I'll talk about that maybe at the next conference. Okay, so in this talk, I'm gonna remind you of how Bobtail operates, or maybe even introduce it to you if you've never seen it before. I'm gonna quantify some of its improvements to Bitcoin Cash theoretically, and then I'm gonna tell you about new developments that we've had with Bobtail and some of the next steps that we have planned. So, why is there variance between blocks in Bitcoin Core, Bitcoin Cash, Ethereum, anything that does proof-of-work mining? It's because of proof-of-work mining. All of the miners, what they're all doing is taking a header, putting the nonce in there, and then taking the hash of it. And that hash is really sampling a number at random between zero and two to the 256 minus one. And so, we all know, right, that the first miner to get lucky and find a sample below the target value wins. So, here's some samples, nothing's really working out, until A is the one that gets lucky and they win the block. Sometimes a miner will get lucky, and that's when the blocks come early. And sometimes everyone collectively is unlucky, and that's when the blocks come late. Okay, and so, this is a statistical phenomenon. This is the distribution of how often blocks come. And you can see at the bottom, I have the scale for Bitcoin. It's literally the same distribution for Ethereum, you just have to read the scale at the top. It's like Celsius and Fahrenheit.
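To make that sampling view of mining concrete, here's a minimal Python sketch. The target value, function names, and eight-byte nonce encoding are all illustrative assumptions for this sketch, not taken from any real client:

```python
import hashlib

# Illustrative target, chosen so a sample lands below it roughly 1 in 16
# tries; real network difficulty is vastly harder.
TARGET = 2**252

def hash_sample(header: bytes, nonce: int) -> int:
    # Double-SHA256 of header plus nonce is effectively a uniform random
    # draw from [0, 2**256 - 1].
    digest = hashlib.sha256(
        hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    ).digest()
    return int.from_bytes(digest, "big")

def mine(header: bytes, max_tries: int = 1_000_000):
    # The first nonce whose sample lands below the target "wins" the block.
    for nonce in range(max_tries):
        if hash_sample(header, nonce) < TARGET:
            return nonce
    return None
```

The point is only that each hash attempt is an independent uniform sample, so the first success is a matter of luck; that's where the early and late blocks come from.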
So, yeah, 5% of the time, it's at least 30 minutes between blocks. And there's sort of a middle waist, right: 80% of the time, it's between one and 23 minutes. So, one thing I want you to take away from this talk, as I sort of alluded to before, is that the size of the block is kind of analogous to network bandwidth, right? If you want more throughput between Tokyo and San Francisco, then lay down more fiber. If you want more transactions, then make the block size bigger. But that doesn't change the speed of light in the fiber, right, you still have a propagation delay. And so, the analogy isn't quite perfect, because I'm really talking about the variance in delay, or delay jitter as it's sometimes called. But that's the parameter I want to talk about in this talk. Making the blocks bigger is not gonna fix the delay or the delay variance; this is adjusting a different parameter in Bitcoin. So, I'm not gonna go into it in this talk, but if you look at my talk online from Scaling Bitcoin, I explain why high variance is actually the cause of, or at least the exacerbator of, double spend and selfish mining attacks, okay? So, how do you reduce this interblock variance? In Bobtail, this is what we do. If you think about Bitcoin today, I want you to think of it as taking samples: we ask the miners for headers whose hash is below a target, and we take the first one and only one. And you can average a single number, right? It's just that number. So, what if you were to average K samples instead, and if that average was below a target? That would still be in line with the same sort of proof-of-work mining behavior. So, let's say that K equals four. What we would do is adjust the target slightly, because mining is a little bit harder, so we wanna make the target easier. And then we wait for headers to come in from the miners. Now, these aren't blocks, it's almost like a weak block, right? So, here we wait until we have at least four.
We take their average and we say, oh, we don't have a block yet, the lowest four I've seen aren't below the target. So, we wait, and the average is getting lower. Finally, we take the lowest four. There's a collection of miners here who have contributed to this now full block, and you actually distribute rewards proportionally to the proofs, we call them proofs, these little sub-block headers. You get a reward proportional to the number of proofs that you have included in the new block. The miner with the header that hashes to the lowest value is the one that produces the block, and what they get to do is decide which transactions go in it, okay? So, it turns out that if you do some math, you can figure out the formula for adjusting the target once you decide on the value of K, where K is the number of proofs that you average together to form a block. The formula is not so important; I'm putting it up there just to show you that you could actually do this switch in a single block. Like, it could just be a sort of hard fork where you make this change. And so, this is in a lot of ways basic applied statistics, right? The difficulty setting is not formally an estimator, but just think of it that way. If I told you that there's a bus that loops around part of Tokyo in a circle, and I wanted you to tell me how often it goes around the circle, well, you can't do that easily from one observation of how long it took. But if I let you view it four or five or 10, maybe even 40 times, you'd have a very good estimate of the average time for that bus to go around. And when you create a block with a single proof, with a single hash value as proof of work, that single value is not great information for estimating how much work has been done. It's pretty good information, as you can tell, we're all using Bitcoin. But if you included more values in estimating the amount of work, then your estimates would be better and the variance would be lower.
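The block-forming rule described above, wait for proofs and check whether the mean of the K lowest is at or below the target, can be sketched in a few lines. This is a simplified sketch with toy integer values; the real scheme compares 256-bit hash values and the name `bobtail_block_found` is mine:

```python
def bobtail_block_found(proof_values, k, target):
    # A Bobtail block exists once the mean of the k lowest proof
    # (sub-block header) hash values falls at or below the target.
    if len(proof_values) < k:
        return False  # not enough proofs seen yet
    lowest_k = sorted(proof_values)[:k]
    return sum(lowest_k) / k <= target
```

As more proofs arrive, the set of k lowest values can only get smaller, so the running average ratchets downward until a block forms.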
So it turns out if you do some more math, the variance of this new method is one over K times the original. In other words, if you take two values, the variance is half of what it is now. If you take 40 values, the variance in interblock times is one 40th of what it is now. So there are different values of K you can choose from. Here you can see my original K equals one line that I put up earlier. If you chose K equals 40, then in the worst case, the worst blocks would come out 13 to 18 minutes after the previous block. And we're still keeping the average at 10 minutes here. Now the waist, the middle 80%, come out every seven to 12 minutes. So the average would be 10 minutes, but the interblock time would be between seven and 12 minutes, 80% of the time. Okay, so the interesting thing is this isn't just a convenience, right? Wouldn't it be great to have a blockchain where blocks come out very regularly, very dependably? But it actually thwarts a lot of different attacks and hardens the blockchain. So right now we all wait six confirmations for something to go through because we're worried about this double spend attack. Well, what if I told you you had an attacker with 40% of the mining power and I could just give you a one-block confirmation? You'd say that's crazy. And the reason is this graph: it shows that a miner with 40% of the mining power could succeed at double spending 53% of the time. Those aren't very good odds, right? If you did Bobtail, this averaging of 40 block headers before you release a block, then the chances of that same attacker succeeding with a double spend drop to below 1%. So this is a significant improvement. If this was adopted in Bitcoin Cash, it would be the most secure blockchain algorithm out there by far. So similarly, oh sorry, that was the circle I didn't put up. Similarly, this approach really completely mitigates selfish mining.
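The one-over-K variance claim can be checked with a quick Monte Carlo. This is only a statistical caricature, not the full protocol: it approximates a Bobtail interblock time as the mean of k exponential waits, keeping the 10-minute average, which is enough to reproduce the 1/k scaling:

```python
import random
import statistics

def interblock_times(k, n=20_000, mean_minutes=10.0):
    # Toy model: one Bobtail interblock time is approximated as the mean
    # of k exponential waits with a 10-minute mean each, so the average
    # interblock time stays at 10 minutes while the variance shrinks.
    return [
        statistics.fmean(random.expovariate(1 / mean_minutes) for _ in range(k))
        for _ in range(n)
    ]
```

With k = 1 the variance is that of a plain exponential (100 min² for a 10-minute mean); with k = 40 it comes out near 2.5 min², i.e. one 40th.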
This is the proportion of blocks that a selfish miner gets in Bitcoin now. It's that red line right on top. And if you're familiar with the paper, I'm setting gamma equal to one; in other words, the miner has a huge advantage here. So any amount of mining power leads to successful selfish mining in Bitcoin now. But if you set K equal to say 20 or 40, or really anything above 20, the proportion of blocks that the selfish miner gets in Bobtail is always below honest unless they have 49% of the mining power, which really just more than decimates the selfish mining attack. Okay, so this is some stuff that I presented previously. What have we been doing in the meantime? Well, I should say, I meant to say at the top, that this is a paper that we've not submitted for peer review yet. We're about to do that, but this is sort of just work in progress right now. In the meantime, we've been investigating new attacks that Bobtail suddenly allows for by its use. And there's no attack we've come up with yet that we haven't been able to solve. The most interesting one, and the hardest one for us to solve, has been withholding attacks. One way to think of it is as a sort of intra-block selfish mining attack. Like, we're all mining, but I'm keeping my proofs to myself and I'm not allowing you to use them. So we had a sort of complicated reward scheme, and we found a simpler one; and it's not only that simpler is better, it actually eliminates the threat of this attack. And so to remind you, the miners are gonna announce block headers, but not necessarily the transactions that go with them. We're calling those proofs, and then a collection of proofs together will form a block if the average of the hashes of those proofs is below the target.
And then we're gonna include this requirement that proofs reference the smallest other proof they've seen to date. What do I mean by reference? Well, every block references its prior, right? It includes the hash of the prior in its header. So we add this requirement that you add the hash of the lowest proof that you've seen. And the consequence is that this ends up thwarting these attacks. By the way, the reference to the other proof is called the support. So each proof contains a support, and some of the supports will actually reference the lowest proof that was ever created for that block. So in the end, miners are gonna be rewarded for the number of proofs that appear in a block, and then they're gonna get a bonus when their support names the smallest proof in the block. And the bonus incentivizes everyone to announce all of their proofs. Let me explain how this works. One other thing to notice is that sometimes you have a lot of proofs and there's more than one way to construct a block. Whoever's creating the block, the miner with the lowest proof available, breaks ties among all the blocks that are available by favoring their own proofs. We allow that, we encourage it in fact. So one thing you may be asking is, well, is it still the case in Bobtail that miners get rewards that are proportional to their mining power? If that's true, then the reward should be linear with the mining power, and that's exactly what we see here. As I said, miners get rewarded based on the number of proofs that are included in the block, and then they get a bonus reward for referencing the lowest one. Now, the order of these proofs as they come out is random, in the sense that they don't come out smaller and smaller and smaller, right? So the lowest one will come out, on average, halfway through. And so if you're being honest, about half of your proofs that made it into the block should reference the lowest one.
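The proofs-plus-bonus reward split could be sketched as below. This is my own illustration of the idea, not the paper's actual formula: in particular the bonus size here (half a base share per qualifying support) is a made-up value, and real proof values would be 256-bit hashes rather than small integers:

```python
from collections import Counter

def split_reward(proofs, coinbase=12.5):
    # proofs: list of (miner, value, support) triples, where support is
    # the proof value this proof referenced as the lowest seen so far.
    k = len(proofs)
    lowest = min(value for _, value, _ in proofs)
    per_proof = coinbase / k   # base share for each included proof
    bonus = per_proof / 2      # ASSUMED bonus size, for illustration only
    rewards = Counter()
    for miner, value, support in proofs:
        rewards[miner] += per_proof
        if support == lowest:  # support names the block's lowest proof
            rewards[miner] += bonus
    return dict(rewards)
```

Note the payout totals can differ from the nominal coinbase once bonuses are counted, which matches the later Q&A point that the per-block coinbase becomes slightly variable under Bobtail.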
And that's exactly what we see here in this simulation I wrote of the whole system. So things are proportional, which is great. Now, the problem is that the miners, as I said before, can withhold proofs so that no one else can reference them. This is a kind of denial of service attack; it's really selfish mining. And if you do this, as I show in the results of the simulation here, the blue line, which is the honest miners, really lose some bonus rewards that they would have gotten. So that's a big problem. We don't really want to allow for that. In fact, it's a little worse than I just showed, because in the last one, just to make it clear, the intra-block selfish miners weren't even prioritizing their own proofs. But if they do that, it gets even worse. The red line there is the attacker: they can increase the number of proofs they get while hurting the honest miners. So this seems like a disaster, but in fact it's fixable with a simple change. We enhance the protocol in a very small way. We say, first, that miners don't accept blocks mined by someone else when they don't include the lowest proof that they know about. And second, miners simply accept new proofs as they hear them, in a first-in, first-out sort of way. And I don't mean according to the timestamp in the header, I mean just as they see them locally coming off the network. What this thwarts is someone withholding and then at the last minute saying, no, no, no, I got all these, here you go. The attacker in that sense won't get their proofs or bonuses into the system. So basically, if there's a batch of proofs that suddenly comes out from an attacker, the honest miners just try to create a block one by one in the order they receive them, prioritizing their own proofs. And that not only hurts the attacker, it actually gives that kind of extra money to the honest miners.
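The two defenses just described, reject blocks that omit the known lowest proof, and process proofs in local arrival order while favoring your own, could be sketched roughly like this. Both function names and the exact tie-breaking are my illustration of the rules as stated in the talk, not the paper's specification:

```python
def block_acceptable(block_proof_values, known_lowest):
    # Defense 1: refuse a block mined by someone else if it omits the
    # lowest proof we have seen for this height.
    return known_lowest in block_proof_values

def order_for_mining(arrival_order, own_proofs):
    # Defense 2: consider proofs strictly in local arrival (FIFO) order,
    # never by header timestamp, while favoring our own proofs, so a
    # withheld batch dumped at the last minute can't jump the line.
    own = [p for p in arrival_order if p in own_proofs]
    others = [p for p in arrival_order if p not in own_proofs]
    return own + others
```

Under these rules a withholder's late flood of proofs arrives behind everything the honest miners already queued, so the withheld proofs and their bonuses tend not to make it into the block.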
So now this withholding attack is basically economically disincentivized, right? Honest behavior will get you the most rewards in the Bobtail system. Okay, so what I wanna say is that this is actually a generalization of Satoshi's ideas and not a break from them, right? We're just increasing the number of values that are compared against the target. Rather than just taking the single lowest value, we take the mean of several values. You can think of the block size as a parameter that can be expanded, or you can think about the number of proofs that get averaged before there's a block as something that can be expanded. As I said, the target can be adjusted in one block, so this is a hard fork, but you can just agree on a particular block where things change over for a particular K. It's compatible with existing ASIC hardware. I didn't show slides on why, but I did present that at the scaling conference, and what I said is absolutely true. As I said already, the rewards are proportional to mining power, so it doesn't cause any problems there. The real costs are that you have to include extra information in the header, but I'm talking about hundreds of bytes, or maybe a couple K at the most. Compared to the block size, this is nothing. Going from eight megabytes to 32 is a ridiculous amount more data than what I'm talking about adding to the header. There's actually some extra traffic because you announce proofs as they happen, but it's again nothing compared to the actual transaction traffic in Bitcoin Cash, certainly the extra transaction traffic that would come from larger blocks. And of course there's new code to write and testing of that code; that's a cost, but we're ready to do that.
I mean, we ourselves are ready to do that, and part of the reason we wanted to be the ones to implement Graphene was to show the community that we're ready to contribute, so we'd love to do that for Bobtail as well and work with others to do it. Okay, so the conclusions, and these are really almost more consequences than conclusions. First, difficulty adjustment: if you have low-variance interblock times, if blocks come out evenly, then any difficulty algorithm will be more accurate. Additionally, as I said already, one-block confirmation is more secure. And then, to link back to the other talks that were presented, you can think of these proofs that I'm talking about as actually just weak blocks, right? They're just block headers. They could be the block headers alone, or the headers plus the transactions that went into that block. If you include the transactions that the miner was mining before they present their proof, then that is weak blocks, and you could actually see which transactions miners are including in their forthcoming blocks. And that announcement of which transactions you're mining would actually be very efficiently done with Graphene. Furthermore, all of this information is more information for the difficulty adjustment. Another thing is that with these proofs, you could actually estimate the amount of mining power that any individual mining pool has at the moment. So you could say, I'm gonna buy this $100, maybe $800, stereo, right, and then you could see what percentage of the mining power is actually working on that particular transaction, not just from their history but from the actual proof values; there are ways to convert the hash of a header into an estimate of hash power.
And finally, as I said, it protects against selfish mining attacks and eclipse attacks, and we've shown that small tweaks to the scheme prevent what I'll call intra-block selfish mining, or intra-block withholding attacks. The end result is that transaction throughput is even and reliably on time, which has great user adoption benefits, right? People would see it as a very even-handed, very nice, reliable currency. So that's about it. I'll take any questions you have. My contact information is there. There is a PDF of Bobtail on arXiv. The work I just presented here is not included, but soon we'll update it to make sure it's part of the PDF. So I'll take any questions. Thanks. Thank you, Brian. All right. Okay, questions. You gotta forgive me because I'm the runner guy. All right. Go from here. Hey, thanks, Brian. One quick question. You talk about rewards for these proofs. Can you just talk more about that? Where is this gonna come from? Any drawbacks and everything like that? So George has this great quote that I love to quote, and he says that all blockchains are just orchestrated incentives, right? I mean, that's why they work. And someone else said it in a different way: you want all honest behavior to be just naturally incentivized. So to us, the structure of the rewards is what makes the protocol secure. I mean, that's where it really comes from. I'm not sure that's what you're asking, but. Yeah, I was thinking. Yeah, so for instance, right now the coinbase of course is 12.5, right? And in this slide right here, I'm saying there's 40 proofs whose hashes are averaged together to see if they're below the target. So if you're a miner with 20% of the mining power, you'll get on average eight of the 40 proofs. So you will get eight 40ths of the reward, right? Now additionally, it's a little more complicated: you would get eight 40ths, plus four 40ths on the bonus side.
So anyway, the formula is basically the number of proofs you have plus the number of bonuses. If you're following this, and sorry if that's confusing, one consequence is that the coinbase would actually be a little bit variable per block with Bobtail, right? You wouldn't necessarily give out exactly 12.5, because the number of bonuses can change per block. I don't think this is a bad thing. You could take the extra and put it in a pool, and there are other nice consequences, other protocols that become available because there would be a pool of unspent coinbase. It would help with other problems that I've heard about. So it's a detail I didn't really have time to go into. But the short answer to your question is you get a reward based on the number of proofs you have, and then the number of proofs you have that reference the lowest proof, that's the bonus. So, okay, other questions? Yeah, hi, so it's really interesting. Thanks for the talk and thanks for this explanation. I am still getting into learning about Bobtail. I think it can be combined with big blocks. What I'm wondering about, and maybe you can talk about this, is if you don't receive the transactions for one of the headers, what do you do then? And also, in general, how do you combine the transactions for the different proofs that you generate? Okay, so strictly speaking, you don't need to know the transactions of the proofs that you're joining together. You just need to know the Merkle root. You just need to know the proof header, right? So what happens is every miner gathers together the transactions that they want to mine, and then they make a header out of that, and then they hash it, I'm missing some details here, but they hash it, and if it's below K times the target, then it's a valid proof. I'm not explaining how I got K times the target, but it's in the paper.
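Putting the two points from this answer together, a header hash qualifies as a proof when it is below K times the per-block target, and a miner's base share follows their proof count. Here's a small sketch; the function name is mine, and the derivation of the K-times-target bound is in the paper, not reproduced here:

```python
def is_valid_proof(hash_value, target, k):
    # Per the answer above: a header hash counts as a Bobtail proof when
    # it is below k times the per-block target, so individual proofs are
    # easier to find than a full Bitcoin-style block.
    return hash_value < k * target

# Worked example from the Q&A: with k = 40 and a 12.5 coinbase, a miner
# holding 8 of the 40 proofs earns 8/40 of the base reward.
base_share = 8 / 40 * 12.5  # = 2.5, before any bonus for supports
```

The bonus for referencing the lowest proof then comes on top of this base share, which is why the effective per-block coinbase varies slightly.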
So let's say you do this and you're a miner and you come up with a proof that's below K times the target; you would send just the header out over the network. Now I do that, and I've got a collection of proofs, and I gather all the ones I need, and I take all 40, and that is the new block, and I just need to know the transactions that I know about. So that's the basic Bobtail scheme, but this is compatible with the other talks where people talked about weak blocks and knowing what the other miners are doing. That's just to show that this is synergistic with other ideas, but strictly speaking, it's a standalone idea as well. All right, we have time for one more question. First of all, thanks for that, can you hear me? Yep. Thanks for this, Graphene and Bobtail, amazing. So how does this interact with, or maybe compare with, the GHOST protocol, or proposals to cut the target block time and the block reward in half? Seems like this has advantages. So GHOST, you mean like Ethereum's GHOST, or Aviv Zohar's GHOST? If you did Bobtail, it doesn't change the orphan rate of Bitcoin Cash or Bitcoin, period. I mean, you don't need GHOST, basically. It doesn't cause any problems that would require GHOST to come to the rescue. Your other question is, what if you cut the interblock target down from, say, 10 minutes? You could have four blocks every 10 minutes, and so you would just use Bobtail, and you would really have a block every 2.5 minutes on average, which would have a much smaller variance. So it's completely compatible with that. Yeah, no problems there. All right, thank you very much, Brian. Thanks.