Hi, everybody. Thank you so much for joining us for today's Protocol Labs Research Seminar. Today we have two wonderful speakers, Joachim and Srivatsan, who are both at Stanford University, working with Dr. David Tse on internet-scale open-participation consensus, aka the technical foundations of blockchains. They are here to present their work titled "Securing Proof-of-Stake Nakamoto Consensus under Bandwidth Constraint." I'll let you both take it from here and expand on that topic. Thank you so much for joining us today.

Yeah, welcome, everyone. We're talking today about securing proof-of-stake Nakamoto consensus under bandwidth constraint. This is joint work with Srivatsan, who will present the second half of the talk, with Lei, who is also in the audience, and with our advisors, David and Mohammad. If you're curious after the talk, we have a preprint up on arXiv; you'll find the link at the bottom. This work will also be presented later in the year at the Science of Blockchain Conference.

So let's get started. I'm going to spoil the full talk in one slide. Here is the 60-second version of what you're about to see in the next half hour or so. We're going to revisit the bounded-delay network model, in particular from the perspective of equivocations, that is, multiple blocks being produced for the same block production opportunity in proof of stake. Equivocations give the adversary an opportunity to spam the network with a huge number of blocks, and that puts a question mark behind the bounded-delay network model. Because of this, we propose a new network model, a bandwidth-constrained network model. This model gives us a new degree of freedom in designing consensus protocols, because there is now a new piece of functionality, so to speak: download rules. The protocol has to specify which download rule it uses to fetch blocks from the network under the bandwidth constraint.
We then go on to show that Nakamoto's popular longest-chain consensus protocol under the canonical download-towards-the-longest-header-chain rule is insecure. And then we present our simple replacement, which we call the download-the-freshest-block rule; we will see later what that refers to. And we prove that under this download rule, the protocol is actually secure.

As we get started, let's do a quick recap of proof-of-stake longest-chain-type protocols. This is a family of protocols that started with the proof-of-work longest chain of Nakamoto in Bitcoin, and it has since been translated to the proof-of-stake setting, for example in Sleepy Consensus or in Ouroboros. You see here my network with its participants, and they're growing a blockchain on the left. In this setup, we have a notion of time, and time is divided into time slots. As time progresses and we step through the slots, nodes take turns producing blocks. There can be time slots in which a block is produced, and there can be time slots that are empty.

How exactly does this work? Let's look at what some nodes think as they sit in a time slot and try to produce a block. Here we're looking into the heads of two of these nodes, and they're asking: am I eligible to produce a block here? To resolve that question, each node has a secret key corresponding to its identity, and it evaluates a verifiable random function (VRF) on the current time slot. When the output is smaller than a certain threshold, the node is eligible to produce a block. In this case, only the node proposing transaction set three is eligible to produce a block; the other node is not. Okay, so there's a block produced.
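To make the eligibility check concrete, here is a toy Python sketch. This is not the actual protocol: a hash of the secret key and the slot stands in for the real VRF (which additionally produces a publicly verifiable proof pi), and the threshold value is made up for illustration.

```python
import hashlib

# Toy threshold: a node wins roughly 1 in 4 slots on average.
THRESHOLD = 2**256 // 4

def lottery_output(secret_key: bytes, slot: int) -> int:
    # Stand-in for the VRF evaluation on the current time slot. A real
    # VRF also outputs a proof pi that others can verify against the
    # node's public key, so the node cannot lie about its output.
    digest = hashlib.sha256(secret_key + slot.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def is_eligible(secret_key: bytes, slot: int) -> bool:
    # The node is eligible to produce a block in `slot` if and only if
    # its lottery output falls below the threshold.
    return lottery_output(secret_key, slot) < THRESHOLD

wins = [slot for slot in range(20) if is_eligible(b"node-secret", slot)]
print("eligible slots:", wins)
```

Note that eligibility depends only on the key and the slot, not on the transactions; that independence is exactly what enables the equivocations discussed later in the talk.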
It can also happen that in some time slot two nodes are eligible to produce a block. That's okay: then we see a fork in our blockchain, two blocks at the same height. These forks get resolved in later slots when there is only a single block produced; that block picks one of the forks of equal length and extends it, and that fork becomes the longest chain again. So nodes produce blocks on top of the longest chain. The only thing I still need to explain is how we extract a ledger from this; it is a consensus protocol, after all. This works using the T-deep rule, very similar to the k-deep rule in Bitcoin: you take the longest chain, you chop off the blocks at the end that come from the most recent time slots, and then you confirm all the blocks that are left in the prefix. And the last thing that is always good to keep in mind in this setting is that there are adversaries in the network. We don't know who they are, but we denote the adversarial stake fraction by beta, as in "bad": those are the bad guys.

Now that we have recapped the proof-of-stake longest-chain protocol, let's also revisit the security guarantee it comes with. Typically, these protocols are analyzed in what we could call the bounded-delay model. In the bounded-delay model, it is assumed that the network delay between any two honest nodes is under the control of the adversary, but it has to stay within some known delay upper bound, capital delta. Under this network model, we can then get a security theorem, which says something to the following effect: if the network delay is bounded by this capital delta, which is a parameter we want to know because we want to tune the protocol to it, and the adversarial stake fraction beta and the block production rate lambda satisfy this complicated-looking expression here (we'll talk later about where it comes from),
then Nakamoto's longest-chain consensus protocol satisfies safety and liveness with overwhelming probability. As a quick reminder: every node outputs a ledger, and safety means that the ledgers output by two honest nodes at two points in time are consistent, so one is a prefix of the other; they may not contain different transactions at the same position. Liveness means that every transaction input to an honest node enters every honest node's ledger within a reasonable amount of time.

The big question, of course, is how true this networking assumption is. In particular, you can see that the bounded-delay model has no notion of capacity, no notion of maximum throughput. There is no criterion of the form "as long as you don't send more than a million messages per second, this works." And it stands to reason that real networks don't behave this way: the delay bound here is independent of the network load. If you run some experiments, though, you can see the difference. Here we took the Cardano implementation on a tiny testnet and varied the block production rate so that the network load increases. On the x-axis you see the number of blocks sent per slot, and on the y-axis the resulting block delay. You can tell that the more blocks you send, the larger your delay gets. Maybe that's not too surprising: after all, these are physical systems, and they don't have infinite capacity. So this is sad. It shows that congestion matters: you need to somehow capture bandwidth constraints and talk about these limits.

This is particularly aggravated in the case of proof of stake, because of equivocation spamming. To see why, let's revisit the node whose head we looked into earlier. This node is trying to produce a block, and to do so, it checks its VRF output.
And if the VRF output is below a threshold, then it gets to produce a block. The block is made up of two parts: a header and the block content. The header includes a bunch of metadata and is typically relatively small. It includes the node's identity i, the time slot in which the lottery was won, a VRF proof pi, a reference to the parent block, and a hash of the transactions that are supposed to go into the block; there is also a signature to bind it all together. The block header is thus bound to the block content, which is typically much larger, since it contains all the transactions included in the block. So when an honest node is eligible to produce a block according to the VRF, it composes this block, puts all the information together, and sends it out to the network. Great.

What if this node is adversarial? Notice that there are no transactions in the block production lottery; the transactions are not input to the VRF. That's intentional, because otherwise you could change the transactions to get more lottery tickets and more attempts at producing a block; you could grind on the transactions. To avoid this, the transactions are not part of the VRF input. But as a result, if a node wins a block production opportunity, it can put out a whole bunch of blocks, all with different transactions. These are all different blocks, and they are all offered to the network. Suddenly a huge number of blocks is being offered to the network because of this adversarial strategy, and we have seen that this puts a question mark behind the delay bound. So equivocation spamming really aggravates the problem of congestion, and it prompts us to look more into bandwidth limits, congestion, and their effect on security.
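To make the equivocation point concrete, here is a small Python sketch; the field names are hypothetical and a plain SHA-256 stands in for real signatures and VRF proofs. It shows how a single lottery win yields arbitrarily many distinct blocks:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Header:
    producer: str      # node identity i
    slot: int          # time slot in which the lottery was won
    vrf_proof: bytes   # proof pi of the lottery win (toy placeholder)
    parent: str        # reference to the parent block
    content_hash: str  # hash binding the header to the block content

def make_block(producer: str, slot: int, vrf_proof: bytes,
               parent: str, txs: list) -> tuple:
    content = "|".join(txs)  # block content: the transactions
    header = Header(producer, slot, vrf_proof, parent,
                    hashlib.sha256(content.encode()).hexdigest())
    return header, content

# The transactions are not part of the VRF input, so one winning
# (slot, proof) pair can be reused with different contents: each of
# these is a valid-looking block for the same production opportunity.
proof = b"pi-for-slot-6"
equivocations = [make_block("adversary", 6, proof, "block-4", [f"tx-{i}"])
                 for i in range(5)]

slots = {header.slot for header, _ in equivocations}
contents = {header.content_hash for header, _ in equivocations}
print(len(slots), "production opportunity,", len(contents), "distinct blocks")
# → 1 production opportunity, 5 distinct blocks
```

All five blocks carry the same winning lottery proof but different content hashes, which is exactly the spamming opportunity the talk describes.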
Note also that this is not a problem in proof of work; it is genuine to proof of stake. In proof of work, the difficulty is calibrated such that there is a global cap on the rate at which blocks are produced. It is not the case that an adversary who gets to produce one block can equivocate and produce a whole bunch of blocks. So in proof of work this problem does not arise; in proof of stake it does.

Our goal in this work is to introduce a formal network model that stays relatively close to the bounded-delay model, but that can capture performance under network congestion, and in particular lets us reason about the security of protocols under attacks like the equivocation-spamming attack in the proof-of-stake setting. We should note that this problem has been observed before, for example in the paper referenced above, "Proof-of-Stake Blockchain Protocols with Near-Optimal Throughput," where the authors realized that to talk about throughput in a meaningful way, you need a model that captures congestion. But once you have a model that captures congestion, in a proof-of-stake setting you are faced with these equivocation-spamming attacks. The authors of that paper give a conditional security statement that essentially rules out equivocation-spamming attacks, and that is the open question we are trying to answer in this work. So the first thing we put forth is a formal network model that can capture congestion and hence allows us to reason about security under equivocations in proof of stake. The second thing is that we are looking for a protocol that is still close to the longest-chain family.
We're looking for a simple modification, a simple protocol that works well within the longest-chain paradigm, simply because it's a prominent family of protocols and we would like to be able to reason about how it behaves in this setting. And the third point is that we are looking for provable security. We are putting forth a formal model and a formal analysis. You might disagree with us, but we think that's actually good: now we have a model that we can talk about, pull apart, and discuss what we like or don't like about it. This is in contrast to a bunch of heuristics circulating mostly in blockchain implementations, where people drop equivocations. There are all sorts of heuristics for handling equivocations, but to our knowledge there is no formal analysis of them; we simply don't know whether they work or not. So here we are specifically interested in provable security.

With that said, let's dive into the network model. You see here on the right-hand side our network. We have our nodes, and they talk to each other via a peer-to-peer gossip protocol. The process works like this: if a node produces a block, it submits it to the gossip network. The headers, remember, are very short pieces of information; they are propagated with a bounded delay of delta_H, so they go out to everybody. Then, based on the block headers a node has seen, it uses a download priority rule to decide which block it would like to download next. Block content is what dominates the download bandwidth, so the node requests a certain block content from the network, and that content is provided to it, but subject to a bandwidth constraint of, say, C blocks per second. And finally, the adversary has a special power, just like in the bounded-delay model where the actual network delay is under the adversary's control.
Here, the adversary has control over the effective bandwidth at each node, through being able to push headers and blocks to nodes, overriding the bandwidth constraint that way. So, coming from this model, the new degree of freedom we get in designing consensus protocols is the download rule: a mapping from a given block header tree to the next block the node wants to request from the network. This is now something we get to design as part of the protocol. A canonical rule for a longest-chain protocol would be to look at the header tree, determine the longest chain in it, and download whatever blocks are missing towards that longest header chain. We call this the longest-header-chain download rule. It is canonical in the sense that Bitcoin essentially uses this rule, Cardano essentially uses this rule, and it seems to work well with the longest-chain paradigm. However, there are some problems with it, and I'm going to show an attack now.

Suppose we start with these three blocks here: one, two, four. They are all downloaded; all honest nodes have seen them and downloaded their content. Suppose that in slots six and seven the adversary gets to produce blocks. The adversary has these block production opportunities, but it withholds the blocks; they are not disseminated for now. Then, in slot nine, an honest node gets to produce a block, block number nine. Its header is disseminated in the network. And just before honest nodes are about to start downloading the content of this block, the adversary releases its two blocks, six and seven. Now consider the longest-header-chain rule from the perspective of a node that sees all these block headers but hasn't downloaded anything beyond block number four.
This node now prefers to download blocks six and seven over block nine, simply because six-and-seven is a longer chain than the chain through nine. So nodes start downloading block six. When a node is done downloading block six, it turns out that some of the transactions in that block are invalid, so that chain needs to be abandoned; it gets thrown out. The node would now want to go back and download block nine, but the adversary releases new equivocations for slots six and seven and puts out a new chain, six-prime and seven-prime. And according to the longest-header-chain rule, this is now the longest chain, so instead of downloading block nine, the node should now download six-prime and seven-prime. You might already guess where this is going: unfortunately, block six-prime also turns out to be invalid. You can only tell after downloading the transactions, because, say, the last transaction in the block is invalid; you only find out after spending your download bandwidth. Notice that in this whole process, the tip of the downloaded longest chain has not moved at all. In particular, nobody other than the node that produced block nine actually has block nine. So if a new block is produced in slot 11, it gets produced on top of the downloaded longest chain, extending block four again. It would be dangerous to produce blocks on top of blocks you haven't downloaded, even though you see their headers, because if those blocks turn out to be invalid, your block gets lost too. So the honest block of slot 11 suffers the same fate as block nine, because the adversary, of course, keeps pushing out equivocations using the opportunities from slots six and seven.
And by now it is maybe not too surprising that this block, too, turns out to be invalid after you have downloaded it. So you see roughly where this is going: this is a liveness violation under the protocol rules. In fact, we implemented Cardano's download logic and this attack in a small testnet, and indeed you can tell that without the attack you get a certain honest chain growth rate, and once you launch the attack, the chain growth rate breaks down to only a fraction of that. If you are a little familiar with longest-chain protocols, this is a warning sign, because the honest chain growth rate is what makes these protocols secure: they are based on the honest chain outgrowing any adversarial chain. If the honest chain growth rate breaks down, the adversary can start producing competing chains and de-confirming blocks. And in fact, that is the case here. This attack is not just a liveness violation but also a safety violation: since the honest chain is no longer growing, the adversary has all the time in the world to fork off earlier in the chain, produce longer chains, and de-confirm blocks.

This mechanism also gives you a hint; maybe by now you are a little uncomfortable with the situation, like something seems odd here. Why shouldn't honest nodes eventually just give up on these slots six and seven? Why do they keep downloading blocks from slots six and seven, even though they have already been burned many times? There are many fresher blocks that came more recently. Something seems to be wrong with these slots six and seven. Why keep downloading those? Why not go for more recent blocks?
And indeed, this is the basic intuition behind the download rule we are proposing: honest nodes should be downloading freshest blocks. The freshest block is the block with the most recent block production opportunity. And it is immediately clear that if the most recent block production opportunity was honest, then the freshest block is honest, and honest nodes will be downloading towards that block. So this gives the intuition that following the freshest block might be a better thing to do. And indeed, it turns out that this freshest-block download rule allows us to prove that the protocol is secure.

Here is another illustration of this fact. Suppose everybody has downloaded this chain of blocks one to four. It so happens that blocks two and four were adversarial, but that is fine as long as the adversary doesn't put invalid transactions in them; honest nodes will still download them. Now, in slot six, an honest block comes around. This honest block is built on top of copies of two and four: the adversary has released copies with conflicting transactions, this time valid ones, and has perhaps pushed these blocks to the node that produced block six. In any case, block six is produced on this chain, and since it is now the freshest block, that is what honest nodes are going to download towards. They go back from the freshest block, following the block headers, to where this chain merges back into the chain of already-downloaded blocks, and they start downloading whatever is missing in its prefix in terms of transactions.
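The contrast between the two download rules can be captured in a toy Python model of the choice a node makes. The block identifiers, header representation, and scenario below are made up to mirror the attack example, and validity checks and tie-breaking are ignored:

```python
# Headers: block id -> (parent id, slot). "G" denotes genesis.
def chain_to(headers, block_id):
    # Walk parent pointers back to genesis to recover the chain.
    chain = []
    while block_id != "G":
        chain.append(block_id)
        block_id = headers[block_id][0]
    return list(reversed(chain))

def next_download_longest(headers, downloaded):
    # Longest-header-chain rule: aim at the tip of the longest chain
    # in the header tree, then fetch the first missing block on it.
    tip = max(headers, key=lambda b: len(chain_to(headers, b)))
    missing = [b for b in chain_to(headers, tip) if b not in downloaded]
    return missing[0] if missing else None

def next_download_freshest(headers, downloaded):
    # Freshest-block rule: aim at the block from the most recent block
    # production opportunity, then fetch the first missing block in
    # its prefix (prefix first, so the block can be validated).
    tip = max(headers, key=lambda b: headers[b][1])
    missing = [b for b in chain_to(headers, tip) if b not in downloaded]
    return missing[0] if missing else None

# The attack scenario from the talk: blocks 1, 2, 4 are downloaded;
# the adversary releases withheld blocks 6, 7 just after the honest
# block 9 appears.
headers = {"1": ("G", 1), "2": ("1", 2), "4": ("2", 4),
           "6": ("4", 6), "7": ("6", 7), "9": ("4", 9)}
downloaded = {"1", "2", "4"}
print(next_download_longest(headers, downloaded))   # → 6  (chases the released chain)
print(next_download_freshest(headers, downloaded))  # → 9  (fetches the honest block)
```

Under the freshest-block rule, the node's bandwidth goes to the honest block nine as soon as its header arrives, regardless of how many longer header chains the adversary advertises.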
And later on, you will see that the proof critically hinges on the fact that if this prefix is not too long, the honest nodes will succeed in downloading the freshest block within the same time slot in which it is produced. Now you might say something seems a little odd with this rule: what if the adversary gets to produce a block? The adversary can attach that block wherever it wants in the block tree, and according to the freshest-block rule, you will still download it. It might even fork off far, far in the past. Why would you go download that? It is not going to be the longest chain. And that's true. In particular, the adversary cannot just produce one block there; it can equivocate there and keep you busy, and you will be downloading all these equivocating freshest blocks. That is true, but it is not as much of a problem, because as soon as the next honest block comes around, it will extend the longest downloaded chain, nodes will download there, and we keep moving the longest downloaded chain forward. And since the majority of blocks, hopefully, is honest, we spend at least half of our bandwidth not subject to this adversarial spamming: the moment an honest, fresher block comes around, we stop downloading whatever the adversary has released and only follow the freshest block. And indeed, if we implement this, again in a testnet that follows Cardano's block synchronization logic, we can see that under this download rule, the honest chain growth rate is restored to basically what it was before the attack. Okay. So this gave you the intuition. Now I'm going to hand over to Srivatsan, who is going to guide you through the proof.

So I'll walk you through an outline of our security proof, how we achieve provable security here. To recap how previous works deal with this: there is the bounded-delay model.
With the assumption that there is a bounded delay for all blocks, you can prove security, and this security takes the form of the theorem we saw earlier: if the network delay is bounded by some delta, and the adversarial stake fraction beta and the block production rate lambda satisfy a certain relation, then the longest-chain consensus protocol satisfies both safety and liveness. How do we get this sort of security? Let's look at it. The honest nodes are building on the longest chain like this. Since the delay can be at most delta, blocks that are produced within a delta interval of each other can end up on a fork like this. But whenever a block is produced more than delta after the previous one, you are guaranteed that the second block producer has already downloaded the first block, and so this block must contribute to the growth of the honest chain. So the honest nodes continue growing their chain, with occasional forks. Effectively, this chain grows at a rate lambda_H: ideally you would want the full one-minus-beta fraction of the total block production, but it is reduced by a factor that depends on the delay, because of these occasional forks. Meanwhile, the adversary, we assume, is all-powerful and does not face problems due to network delay; maybe it is centralized. So the adversary grows a chain at its ideal rate, a beta fraction of the total block production rate. These earlier proof techniques show, with a bunch of modeling and math, that all possible attacks on safety and liveness can be modeled as a race between these two block production processes: honest blocks at the reduced rate, and adversarial blocks at the ideal rate. So we get a guarantee that as long as the adversarial rate is smaller than the honest rate, which is what the equation in the theorem summarizes, you have both safety and liveness.
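The race just described can be turned into a quick numerical check. The discounted honest growth rate below uses one common form from longest-chain analyses, lambda_H = (1-beta)*lambda / (1 + (1-beta)*lambda*Delta); the exact expression in the theorem differs, so treat this as an illustrative sketch rather than the paper's condition:

```python
def honest_growth_rate(lam, beta, delta):
    # Honest blocks are produced at rate (1 - beta) * lam, but blocks
    # produced within delta of each other may fork; the denominator
    # below discounts for those wasted, forked honest blocks.
    lam_h = (1 - beta) * lam
    return lam_h / (1 + lam_h * delta)

def is_secure(lam, beta, delta):
    # Security holds when the (discounted) honest chain outgrows the
    # adversary's private chain, which grows at rate beta * lam.
    return beta * lam < honest_growth_rate(lam, beta, delta)

# A slow block production rate leaves room for the delay discount...
print(is_secure(lam=0.1, beta=0.25, delta=2.0))  # → True
# ...while a fast rate forks the honest chain so much that it loses.
print(is_secure(lam=2.0, beta=0.25, delta=2.0))  # → False
```

This also shows why the block production rate has to be tuned to the delay bound: raising lambda raises the adversary's rate linearly but saturates the honest rate.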
What we did is notice that this proof does not actually use the assumption that there is bounded delay for every single block. It is sufficient that a large enough fraction of the honest blocks (we will characterize what "enough" means) is downloaded within bounded delay; then the honest chain still grows at a certain rate, and we get security as long as this honest chain growth outnumbers the adversarial blocks. It is important to note here that whenever you download an honest block, you actually have to download its entire prefix, because otherwise you cannot validate the honest block. So that is our goal in this security proof: we want to show this kind of claim in the bandwidth-constrained model, namely that there is a bounded delay for enough honest blocks, including their prefixes. For this, we define something called unique slots. As we saw in the block production process earlier, in many slots there will be only one single honest block being proposed; such a slot we call a unique slot. As an example here, slots one and six are unique slots. The useful property of our download priority rule, the freshest-block rule, is that whenever there is a unique honest block, that block is the freshest block, at least for some time, until the next block comes in. So as soon as this block is produced, it is the freshest block. Let's take an example. Say the already-downloaded chain was the chain at the bottom, one, two, four, and now you want to download the freshest block, which is block six. This works as long as the prefix of this block is small enough, say at most some number C-bar of blocks, which is a parameter we will choose.
So if the prefix is small enough, then you should be able to download it in some bounded amount of time, which depends on your bandwidth. To make this precise: delta_H is the delay for receiving the headers, and there is a bandwidth constraint of C blocks per second that you can download. If you set your protocol's slot duration equal to the header delay plus the time it takes to download C-bar blocks, which is C-bar divided by C, then within one slot you can receive all the headers as well as download C-bar blocks, which allows you to download the whole prefix of this honest block. Our proof technique shows that this prefix is not too large with overwhelming probability, and hence you actually do end up downloading the honest blocks. So that is what we achieve, which was our goal in this model: with the freshest-block download rule, we get bounded delay for these unique honest blocks, which are at least a large fraction of the honest blocks, and this happens with overwhelming probability. Once we have this kind of bounded-delay argument in the bandwidth-constrained model, we can reuse tools from earlier proofs in the bounded-delay model and prove the security theorem.

The security theorem is of this form. We define kappa as the security parameter, and we already saw beta as the fraction of adversarial stake. The theorem says that the proof-of-stake longest-chain protocol with the freshest-block download rule is secure with the following parameters. First, as I remarked, we would like to set the slot duration tau so that you can download C-bar blocks per slot, and we want all the unique honest blocks to be downloaded within the same time slot with very high probability. For this we require the time slot to be large enough; more precisely, it grows proportionally to the security parameter.
Second, the block production rate. We said that the blocks of unique slots can be downloaded within one time slot, and we would like these unique-slot honest blocks to outnumber the adversary; more precisely, the number of unique slots should be at least half of the total block-producing slots. This is similar to the honest majority assumption: you want your block production rate to be slow enough that you have enough unique slots to outnumber the rest of the slots. And finally, we have the confirmation time we talked about in the T-deep rule; that also has to be large enough, here proportional to the square of the security parameter. So if you have these parameters, that is, the slot duration is large enough to allow downloading enough blocks and you have enough confirmation time, then you achieve safety and liveness with the freshest-block download rule in the bandwidth-constrained model we propose, with overwhelming probability, over executions of length polynomial in the security parameter.

Okay, so that goes over the security proof, but we can also talk a bit about performance under bandwidth constraints. Here is an illustration of the bandwidth utilization of this protocol under spamming attacks. The red curve shows the attack when you are implementing the longest-header-chain download rule. In this experimental setup, the nodes are limited to a 20 megabits per second bandwidth limit, and the attack overwhelms that bandwidth completely; the attack goes on forever once started, as we saw in the example earlier. Moving over to the freshest-block download rule, you can see that the shaded blue segments, which indicate the time periods during which the attack lasts, are short: the attack only lasts a small amount of time and then stops. How does this happen? It comes down to whether the freshest block is adversarial or honest.
Whenever the freshest block is adversarial, you could be wasting a lot of bandwidth downloading its spam blocks, which is what the peaks in the blue curve indicate. Whereas whenever the freshest block is honest, you go back to downloading just the honest blocks, and the honest nodes are not going to spam you, so your bandwidth utilization is very low. On average, we are actually using only a fraction of the bandwidth, which speaks to the efficiency of the freshest-block rule as well.

We can illustrate this bandwidth utilization in a bar like this. Say the entire length of the bar represents your available bandwidth. Even in the worst case, only about half of this bandwidth is consumed by spamming due to equivocations. Meanwhile, the green portion indicates the actual throughput, that is, how fast the ledger at the end grows. Why is this throughput so small? In particular, it is proportional to one over the security parameter. This is because, as you recall, we set the time slots proportional to the security parameter; time slots have to be long, and hence the throughput goes down. We expected that. Okay, it is a bit sad that you utilize half of the bandwidth dealing with spamming but get only very little throughput. We can improve this. Notice that throughput is low because there are only a few block production opportunities; blocks are produced slowly. But that also means there are not too many valid blocks in the final confirmed chain. So say, hypothetically, that you have already reached consensus on the chain: somebody has done it for you, and you simply have to passively follow the longest confirmed chain. Once the chain is confirmed, thanks to the consistency and safety properties of the protocol, you can download this confirmed chain without being affected by spamming. So you only need a little bandwidth to passively follow the chain.
That's shown in the top figure. You still have the same throughput, but what you need to download to passively follow is just about twice the real throughput, because up to half of the confirmed blocks could be contributed by the adversary, and those don't contribute to throughput. Okay, so where is this going? We used an idea from this reference on the top, which is parallel composition of longest chain protocols. The idea is that every node follows one chain, one longest chain protocol, as their primary chain. But there are many, many secondary chains, where they simply passively download the confirmed chain. And so what we get from this is: on the primary chain you are susceptible to a lot of spamming and utilize half of your bandwidth, but for the secondary chains you can download the confirmed portion of the chains with much less bandwidth. So you can fit a lot of secondary chains into your leftover bandwidth. And effectively, if we add up all these green segments in the lowest bar here, it adds up to roughly one fourth of the entire available bandwidth, which is good. So we might expect to face some bandwidth loss because of the spamming attacks, but we're still able to get throughput of at least one fourth of the available bandwidth, and not just the small, tiny green bar. Okay, so let me elaborate a bit on this parallel chains idea. Each node is randomly assigned one primary chain to follow, and those nodes are responsible for maintaining consensus on it. They follow exactly the longest chain protocol with the freshest block download rule that we have discussed. But this means they need to face some amount of spamming. So basically, the nodes who follow a certain chain as their primary chain are expending some amount of bandwidth, taking one for the team, in order to provide security.
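The "adds up to roughly one fourth" claim can be sketched with the same illustrative numbers as before: one primary chain costs half the bandwidth in the worst case, each secondary chain costs about twice its throughput, and we pack secondary chains into the leftover half.

```python
# Hedged sketch of the parallel-composition accounting; numbers assumed.
C = 20.0        # per-node available bandwidth, Mb/s
kappa = 50      # security parameter, assumed
t = C / kappa   # per-chain throughput (the green sliver), Mb/s

primary_cost = C / 2       # worst-case spam budget on the one primary chain
leftover = C - primary_cost

# Passive following costs <= 2x the throughput, since up to half of the
# confirmed blocks may be adversarial and contribute nothing.
secondary_cost = 2 * t
m = int(leftover // secondary_cost)   # secondary chains that fit

total_throughput = (1 + m) * t
# (1 + m) * t  ~  t + (C/2) / (2t) * t  =  t + C/4  ~  C/4
```

So the aggregate throughput approaches a quarter of the available bandwidth regardless of the security parameter, because the per-chain loss of 1/kappa is recovered by running about kappa/4 chains in parallel.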
And once the nodes who are working on a certain chain as their primary chain provide consensus, the other nodes can follow this chain as one of their secondary chains. So each node follows many other chains as secondary chains. And on a secondary chain, you only follow the confirmed blocks, because those are confirmed and the adversary cannot spam over them, whereas in the more recent, unconfirmed segment of the chain you could be susceptible to spamming. So this is the idea. And in the end, each node accumulates all the chains, the primary and the secondary chains, and arranges them into one single ledger. The throughput of this ledger, as we saw earlier, is at least about one fourth of the available bandwidth, which is in particular a throughput that is independent of the security parameter. So that eliminates this limitation on throughput even under the bandwidth-constrained model. So yeah, with that, we can conclude this talk and recap the goals that we set out. We have a bandwidth-constrained network model, which enables the study of the security of proof of stake under these equivocation-based spamming attacks. And we were also able to use the model to characterize performance under network congestion, which we did here in terms of throughput. We showed that the longest chain protocol with the download-longest-header-chain rule is insecure, whereas there is a simple download-freshest-block rule for which we are able to prove security. Yeah, so that concludes the talk. Thank you so much everybody for joining us for today's Protocol Labs Research Seminar. I want to thank both of our speakers, I had a great time listening to your presentations today. And thank you everybody else for coming and partaking in the questions at the end. If you'd like to follow along with any of the updates here at Protocol Labs Research, make sure to give our Twitter handle a follow at Proto Research.
And you can also use it to keep up to date with the new seminars and new talks that we have scheduled for both March and April. All right, thank you again for coming today and we look forward to seeing you again. Thanks guys.