So we will now have the instructor, who you can talk to, or yes, we will talk about correct-by-construction Casper and sharding, and you can join in. Yeah, hi everyone. So I didn't plan a presentation or anything, but I thought I would do an AMA on correct-by-construction Casper and sharding, a project that I've been working on, and have kind of a technical conversation about consensus, and specifically about this class of consensus protocols that I've been working on. I've published a paper on it, and I'm still working on publishing more and more information about it. So this is an opportunity for you guys to get some pre-publication information, or some information to clarify the published stuff. But if no one has any questions right off the opening, I can give the basic outline of the safety proof that all of the consensus protocols in the correct-by-construction family are derived to satisfy. So I would like to do that if no one has any objections, but I see a question. Yeah. Oh, yeah, sure. So hi, I'm Vlad Zamfir. I'm a researcher at Ethereum Research, which is actually part of the Ethereum Foundation. I work predominantly on consensus protocols and proof of stake. I have a few side projects, but that's mostly what I work on: consensus protocols and proof of stake. I guess I'm like a consensus protocol engineer. So the nice thing about all the correct-by-construction protocols is that they all satisfy the same safety proof, and the safety proof is really kind of simple. I can show you the basic shape of it. Basically, we're going to consider this kind of structure where we have objects called protocol states and morphisms between them called protocol state transitions. And if there's a transition from one state to a second, and a transition from that second state to a third, then there's also going to be a transition from the first to the third.
So basically I'm saying, oh look, there's a category of protocol states and protocol state transitions. And then there's going to be a map from protocol states to statements about the consensus, called the estimator. So there's this thing called the estimator that maps these protocol states, which I'm going to denote like this, to propositions about the state of the consensus. This would be something like: oh, the consensus value is zero; the consensus value is one; the block at this height has this hash; the block at this height has that hash. So it's a proposition about the value of the consensus. You can think of it as straight up, oh, here's the value, or also as something a little weaker, like, oh, the value has this property. So the estimator is this theoretical map that takes protocol states to propositions about the consensus. This is like the fork choice rule, for example, which maps sets of blocks to a single blockchain, which is kind of the value of the consensus that you are guessing. So it represents the guesses for the value of the consensus that a node would make at any given protocol state. And then we have this definition of safety: some proposition is safe at some protocol state if and only if, for any protocol state that you can evolve to, that proposition also holds at that state. So basically, if the proposition holds at every protocol state in the future, then it's called safe. A value of the estimator, or something that the estimator implies, is safe if it holds at every future protocol state. So if the block at height 10 has this hash at this protocol state and at all future protocol states, then we call that block safe. Or we call the proposition "the block at this height has this hash" safe.
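To make the estimator idea concrete, here is a minimal Python sketch. This is my own illustrative encoding, not anything from the talk: a protocol state here is just a set of (sender, value) votes, and a majority rule stands in for a real fork choice rule that maps sets of blocks to a blockchain.

```python
# Toy sketch: the estimator maps a protocol state to a proposition
# (here, simply a guess) about the consensus value.

from collections import Counter

def estimator(state):
    """Map a protocol state (a set of (sender, value) votes) to a guess
    for the consensus value; a stand-in for a real fork choice rule."""
    counts = Counter(value for _sender, value in state)
    if not counts:
        return None  # an empty state gives no guess yet
    (top_value, _count), = counts.most_common(1)
    return top_value

# Three validators have voted; the estimator guesses the majority value.
state = {("a", 0), ("b", 1), ("c", 1)}
```

A call like estimator(state) returns 1 here, since two of the three votes are for 1.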
And then, by the way, there's also a state transition from every state to itself. That's why I didn't say that this state itself also has to satisfy the proposition: if all future states satisfy it, and every state is a future state of itself, then I get that for free. Okay, so now we're going to get to the key part of the safety proof. Imagine we have some protocol state here, we'll call it sigma 1, where we have some safe proposition p. Then we have this lemma that basically says: oh look, if there's some other protocol state here, sigma prime, that sigma 1 can transition to, then p is also going to be safe there. Because if something is invariant over all future states, then it's invariant over all future states of future states, since future states of future states are also future states. And so you can show pretty easily that if some proposition p is safe at sigma 1, then it's also going to be safe at sigma prime. And now imagine we have this other state, sigma 2 here, that also evolves to sigma prime. Then we're going to have the following property: sigma 2 is not safe on the negation of p. Because if it were safe on the negation of p, then sigma prime would also be safe on the negation of p. But because of a property of the estimator that I haven't talked about, it's impossible to be safe on p and on the negation of p at the same state. It's kind of intuitive, because you can't even have both p and not p hold at any state, so it can't be the case that both p and not p are safe at any state. And so basically, if sigma 1 is safe on p, then sigma 2 must be not safe on not p. So at first, all we really knew is that we had two protocol states, sigma 1 and sigma 2, with a common future protocol state, sigma prime.
And what this meant was that safety here implies safety there, which means that we don't have safety on the negation of p here. So: safe on p at sigma 1 implies not safe on not p at sigma 2. And then by a little bit of algebra, this is just the normal rule for getting rid of the implication, I don't remember its name, this means not this guy or this guy. And then by De Morgan's rule we get: not (safe on p at sigma 1 and safe on not p at sigma 2). So this conclusion here is exactly consensus safety. It says that, oh look, we don't have safety on p at sigma 1 and safety on not p at sigma 2. So it turns out that this statement, that safety on p at sigma 1 implies the absence of safety on not p at sigma 2 if sigma 1 and sigma 2 have a common protocol future, is the same as consensus safety for decisions on safe values. More specifically, it says that p and not p are not both safe at sigma 1 and sigma 2 respectively. So basically all of these protocols are going to work on the following premise: we're only going to make decisions on safe values. I should maybe have mentioned that earlier. All the decisions that the consensus protocols make are going to be on safe values. And so the decisions are going to be consensus safe for any two protocol states that have a common protocol future, by this argument that says: if they have a common protocol future, then it's not the case that they're safe on some proposition and its negation. So that's the basic shape of the safety proof. And then the next part is to guarantee that nodes have a common protocol future as long as there are less than some number of Byzantine faults. Because if we have a common protocol future, then we have consensus safety. That's the part of the proof that I've shared with you, for decisions on safe estimates.
And then the part that I didn't share, the next part, which if you don't stop me I'll get to, is that we construct the protocols so that nodes have a common protocol future as long as there are less than some number of Byzantine faults. So I guess now I'm going to pause and see how much I've lost you. We have protocol states; protocol state transitions; an estimator that maps protocol states to propositions about the consensus; and a definition of safety that says, oh look, some proposition is invariant over all future protocol states. We have this notion that, oh, if p is safe at sigma 1 and there's a transition from sigma 1 to sigma prime, then it's also going to be safe at sigma prime. Which additionally means that for anything else that transitions to sigma prime, you're not going to be safe on its negation, because then you'd have to be safe on both p and not p at sigma prime, which is impossible. And this gives us a kind of distributed safety for any protocol states that share a future protocol state in common. So if you and I are at two protocol states and we share a future protocol state in common, then any decisions we make on things that are invariant over our futures have to be consistent, because we share this protocol state where we could both end up, and where all the things that are safe for each of us would all be true. So that's the basic setup for all these protocols. The things that vary between them are: what actually are our protocol states? What is the estimator map? But the basic setup of the safety proof remains unchanged, which is why it's pretty cool. It's also why we can generate protocols and make changes to them without changing the proof a lot, or at all. And that ends up being really useful, because you don't want to have to re-prove properties of your protocol as you iterate.
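Written out in symbols, the argument so far looks like this. The notation is mine, just restating the steps above: S(p, sigma) says that p is safe at sigma.

```latex
% S(p,\sigma): proposition p holds at every future state of \sigma.
\[
  S(p,\sigma) \;\iff\; \forall \sigma'.\; (\sigma \to \sigma') \Rightarrow p \text{ holds at } \sigma'
\]
% Lemma 1 (forward safety): futures of futures are futures.
\[
  S(p,\sigma_1) \wedge (\sigma_1 \to \sigma') \;\Rightarrow\; S(p,\sigma')
\]
% Lemma 2 (consistency): p and \neg p cannot both be safe at one state.
\[
  \neg\bigl( S(p,\sigma') \wedge S(\neg p,\sigma') \bigr)
\]
% Theorem: if \sigma_1 \to \sigma' and \sigma_2 \to \sigma' (a common future),
% then decisions on safe values are consensus safe:
\[
  S(p,\sigma_1) \Rightarrow \neg S(\neg p,\sigma_2),
  \quad\text{equivalently}\quad
  \neg\bigl( S(p,\sigma_1) \wedge S(\neg p,\sigma_2) \bigr)
\]
```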
Or as you add features, for example. So yeah, please. Can you give a short introduction? Why and how, and what you are actually doing, and why is this important? Sure. Yeah, sure. Okay. So the question is: what am I actually doing and why is this important? Okay. So basically what I'm describing is a proof about certain protocols, consensus protocols specifically. But what I'm doing is setting up a process for generating consensus protocols, and I can share with you some consensus protocols that are going to satisfy this proof. And if you believe that consensus protocols are useful, then you might think that this is useful. I could make that pitch too. Consensus protocols are useful for making reliable processes out of unreliable processes. They're good if you want fault tolerance in the execution of computation, especially if you have non-commutative operations. They're a really great hammer in distributed systems. They provide a really, really strong guarantee of replication. And they make it easier to reason about how to do a lot of stuff, because if you have consensus protocols, you don't need to think in a distributed fashion as much when you're designing decentralized systems. Yeah. How does a long-range attack fit into this? For my understanding, the problem is that states that were safe in the past are now not safe anymore, because the keys that signed those states are not safe anymore. Yeah. So the question is about the long-range attack: you know, isn't it the issue that old states are not safe anymore? Well, I think it's important to note that the long-range attack is really an economic problem, more than a pure consensus protocol design problem. It's an issue to do with the fact that nodes that are unbonded are not incentivized to not double-spend, and therefore it will be very cheap for them to do that.
And so we would expect the rates of Byzantine faults to be much higher for old states than for newer states, by this kind of reasoning. And that doesn't really factor into this, because this is not the economic story. But let me say that when something is safe here, we're talking about a local notion of safety: that a node will never reach a protocol state where something doesn't hold. We're not talking about consensus safety, except in the context of this distributed safety proof. So the interesting thing about this is that it bridges the gap between a local notion of safety, a local invariance, and a distributed one. When you have more than some number of Byzantine faults, as it turns out, you can't have consensus safety even though you might have local safety. It turns out to be impossible to build a protocol that guarantees that two nodes have a common future protocol state, with this proof holding, in the context of 100% Byzantine faults, at least one that is also non-trivial, meaning one that can actually decide on values, possibly two inconsistent ones. If you have a protocol that never decides on anything, then you can satisfy this quite easily, because you can always have common future protocol states if you never make any irreversible decisions. It's the irreversible decisions that make two possible protocol states not share protocol futures. So what ends up happening in consensus protocols is that at some point nodes will be bivalent, and at some point they're going to be completely committed to a value. And it's possible, with 100% Byzantine faults, that one node will end up here and one node will end up there: here they'll be safe on zero, there they'll be safe on one, but they don't have consensus safety. And so really, when you're talking about the long-range attack problem, you're not talking about local safety issues. The local safety stuff is just fine.
You're talking about the consensus failure, meaning a lack of distributed safety due to an increased number of Byzantine faults, which is something that fits perfectly well in this framework. But this framework does nothing to guarantee the nonexistence of Byzantine faults. That's more in the economics and governance, the validator management layers. For now I'm just talking about much more base-level consensus protocol shenanigans. But actually, to continue on that: so with the economics, do we have a formal framework like this? Like, how formal can we get, how good is the analysis? The question is: do we have a formal framework, and how good is the analysis, for the economics of the protocol? Yeah, so the consensus and the incentives. So one thing is the protocol design, and another is the analysis. And to some extent we know there are limits to our analysis that we're not going to be able to capture in our protocol design. But we do have models. Basically, at the end of the day, I think the foundational example is: you have a smart contract, and it wants to pay Alice to send a message to Bob, or penalize her if it doesn't happen. But if Alice fails to send the message, or Bob fails to send the proof that he received the message, then the contract doesn't know whose fault it is. And so somehow we have a trade-off, right? If the contract penalizes more, fewer Alices and Bobs will show up to play the message-passing game, because it's more unfair: for any given perceived rate of Byzantine Bobs, Alice will have a lower return if the Byzantine rate is higher. And so the simplest thing to think about is: okay, let's have a situation where we have a bunch of people showing up to send messages to each other, in a really direct and simple way, rather than in a consensus protocol.
And think about: okay, what is the amount of participation, for a given utility function, for a given level of perceived Byzantine faults, for a given background rate of interest? How many deposits will show up? And then think about: okay, as an attacker attacks and increases people's perceived rate of Byzantine faults, how fast does participation fall? Because basically the fundamental, most effective attack in any of these protocols is first to discourage participation and then to commit the intended faults, because the fewer deposits there are, the easier it is to attack. And so this kind of game, where anyone can show up to play the Alice-sends-a-message-to-Bob game, is I think the simplest example I've been able to think of with all of the same challenging economic features as the problem of incentivizing a consensus protocol. But more broadly, about incentivizing consensus: the first thing to think about is, okay, we want to incentivize people to follow a particular protocol, assuming that we solve the problem of consensus. And so we need to be able to detect people's behavior, detect their deviations from that protocol, and penalize those. But specifically what we want to do is penalize the deviations that cause a degradation in the quality of the protocol. So imagine that there is a failure, and there are some Byzantine faults, and some of those Byzantine faults caused the failure and some of them didn't. The ones that caused the failure are much more culpable, and those are the ones that, if you penalize them, you actually make the cost of failure much higher. But if you penalize people for any Byzantine fault, whether or not it caused a failure, then you increase the risk that someone will be penalized just due to faulty hardware or software, and that decreases participation and reduces security.
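Here is a toy numerical sketch of that participation story. This is entirely my own illustration, with made-up numbers and a hypothetical linear payoff; it only shows the shape of the effect, not any real parameterization.

```python
# Toy model: a candidate joins the Alice-sends-a-message-to-Bob game only
# if the expected payoff beats their outside rate of interest. Raising the
# perceived Byzantine rate lowers payoffs, so participation falls, which is
# the "discourage participation, then attack" dynamic.

def expected_payoff(reward, penalty, byzantine_rate):
    """Linear utility: paid when the exchange succeeds, penalized when a
    (possibly not-your-fault) failure occurs."""
    return (1 - byzantine_rate) * reward - byzantine_rate * penalty

def participation(outside_rates, reward, penalty, byzantine_rate):
    """How many candidates find playing better than their outside option."""
    payoff = expected_payoff(reward, penalty, byzantine_rate)
    return sum(1 for outside in outside_rates if payoff > outside)

outside_rates = [0.01 * i for i in range(10)]  # outside options: 0% .. 9%
calm   = participation(outside_rates, reward=0.05, penalty=0.5, byzantine_rate=0.01)
attack = participation(outside_rates, reward=0.05, penalty=0.5, byzantine_rate=0.05)
# Participation under a perceived attack is lower: fewer deposits show up,
# which makes the next attack cheaper.
```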
And so the protocol needs to infer the participants' behavior and penalize only Byzantine behavior, truly malicious behavior, in order to not penalize honest behavior, in order to maximize the cost of attack. So my philosophy is very much: maximize the cost of attack first. And I try to do that inside tractable models, because things can blow up quick. I mean, even with quadratic utility it turns out to be really hard to parameterize these things. But I feel like we have two threads here: this economics thread and this distributed systems thread. I guess, you know, the next questioner can choose your own adventure, you know? Anyone? Sure. So how does this tie into Casper, the one Vitalik is working on? Are you going to prove its safety in this framework? So it is different from Casper the Friendly Finality Gadget, which is the protocol that Vitalik's working on, which is an overlay on top of proof of work that finalizes checkpoints. So the question is: oh, can I compare this to the finality gadget, and do I plan on proving the safety of the finality gadget in this framework? And I have thought about that, right? Like, letting the protocol states of the finality gadget be the protocol states here, and having an estimator. And it definitely seems to work. But the reason why this proof is so nice, it turns out in the end, isn't just that we can show that nodes have a common protocol future as long as there are less than some threshold of Byzantine faults. It's the way that we show that. We construct a really simple way to make that happen, whereas to prove that for the finality gadget would basically require that I run through the finality gadget's safety proof, as far as I can tell. So I can prove it in this framework in a way that just uses Vitalik's safety proof, but I don't know that I can prove it in this framework natively, right?
So I guess that's a good transition to talk about how I make sure that nodes have common protocol futures as long as there are less than some threshold of Byzantine faults. It's simple. Pretty awesome, even. So imagine that we were just to have, what? So imagine we were just to have protocol states that were sets, and protocol state transitions that go to supersets, so that we have this lattice. Where basically, you know, you have, like, A, B, A union B, and eventually we have the empty set down here. So if protocol states were sets of protocol messages, and state transitions were receiving a protocol message, and two nodes can always send messages to each other, then we would pretty trivially have the guarantee that nodes always have a common future protocol state. Because if my protocol state is A, where A is a set of messages, and your protocol state is B, well, then we have a common future protocol state called A union B. And this is great because, like, oh look, we've guaranteed that we have common future protocol states, which means that we have consensus safety, right? But unfortunately, because every two states have common future protocol states, we never end up with an event where nodes don't have a common future protocol state, which is specifically what you need for non-triviality. You need to be able to make inconsistent, irreversible decisions. So what we're going to do is this, right? We're going to do the same thing, but any protocol states that have more than some number of Byzantine faults, we're just going to delete. So if the union of A and B has more than some number of faults, if the fault count of A union B is more than t, some threshold, we just delete that state. How exactly do we figure out the fault count from a set of messages? I'll maybe talk about that in a bit.
But with this setup, now two nodes have a common future protocol state. So a node having seen the set of messages A and a node having seen the set of messages B have a common future protocol state as long as A union B has less than some number of faults. They only fail to have a common future protocol state when the union exhibits that number of faults. And that is basically the construction. Nodes have a common future protocol state as long as the union of their views doesn't exhibit some number of Byzantine faults, because protocol states are sets of messages, and they can transition to the union of their views as long as the union is a protocol state, and it will be as long as it has less than some number of Byzantine faults. And so the setup is relying on this idea that we can do Byzantine fault detection from sets of protocol messages. And it turns out we can. But I'll pause for questions first. Actually, first let me do an overview. So, okay: it would be great if we just made protocol states sets of messages and allowed any two protocol states to have a common future by looking at the union of those messages, because then we would just have consensus safety for everything. But we can't do that, because we would have triviality: any two states would have common future protocol states, which means that no state has ever made any kind of irreversible decision of any consequence. And we can give ourselves the possibility of having two states that don't have common future protocol states by simply deleting all of the states that exhibit more than some number of Byzantine faults. So, here is maybe a story. Let's call this a set of messages A, and this a set of messages B, and they have some intersection, right? And these paths represent sequences of messages from validators.
So this is a validator making a bunch of messages, and some of those messages are in the intersection and some of them are not. So all these validators are honest, but validator A doesn't see those messages and validator B doesn't see this message. But there's going to be one validator here who equivocates, in such a way that they seem to be honest in both A and B, but in the union of the views you can detect that this validator has equivocated. So in this case, assuming that the fault tolerance threshold is zero, B wouldn't be able to receive these messages from A, and therefore wouldn't be able to transition to this common protocol state A union B. And similarly, A wouldn't be able to see these messages from B, and wouldn't transition to the state A union B, and so they would not share a common protocol future. But if this Byzantine fault weren't there, they would share a common protocol future, because their Byzantine fault detector wouldn't stop them from going there. So yeah, please ask me questions. I feel like I'm not doing any communication here. Relative to proof of work, you mean? So the cool thing about this setup is that it turns out that, for example, in Casper the Friendly Ghost, we can finalize blocks in the sense of asynchronous Byzantine fault tolerant protocols with the same network overhead as Nakamoto consensus. And also, when we produce blocks, they really just have to have a signature, so we're talking about no increase in network overhead over Nakamoto consensus, and also a dramatic reduction in the cost of producing blocks. The cost of verifying blocks is a little bit higher, because you need to check a hash and a signature rather than just a hash, although some hashes are pretty hard to do, so you never know.
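The construction in that story, protocol states as message sets, transitions to supersets, and would-be states past the fault threshold deleted, fits in a few lines of Python. This is a toy encoding I made up, not the real CBC Casper code: a message is (sender, value, justification), where the justification is a frozenset of that sender's earlier messages.

```python
# Sketch: states are message sets; a union survives only if its fault
# count stays within the threshold t.

def equivocations(messages):
    """Count senders with two messages neither of which justifies the other."""
    by_sender, equivocators = {}, set()
    for msg in messages:
        sender = msg[0]
        for other in by_sender.get(sender, []):
            if msg not in other[2] and other not in msg[2]:
                equivocators.add(sender)  # no single execution fits both
        by_sender.setdefault(sender, []).append(msg)
    return len(equivocators)

def is_state(messages, t):
    """A message set is a protocol state iff it exhibits at most t faults."""
    return equivocations(messages) <= t

def common_future(a, b, t):
    """Two states share the future A union B iff the union is still a state."""
    return is_state(a | b, t)

# An equivocating validator: two messages, neither justified by the other.
m1 = ("v1", 0, frozenset())
m2 = ("v1", 1, frozenset())
# With threshold t=0 the union is deleted, so the views A={m1} and B={m2}
# have no common future; with t=1 they can still be reconciled.
```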
So I kind of lost a listener here, basically when I was talking about the state transitions and why a state transition would or wouldn't be allowed. Even though the state transition is meant to be the superset relation, it's really only the superset relation between sets of messages that don't exhibit too many Byzantine faults. So in this case, the state transition from B to B union A would introduce a Byzantine fault that wasn't observed in just B, and that's the story there: A is a protocol state, and B is a protocol state, and A union B is not, because A union B exhibits too many Byzantine faults. It's kind of like there were too many equivocations between A and B for A and B to ever be reconciled. Yeah, so there are two kinds of faults, really: invalid messages and equivocations. These are the faults that are distinguishable from network latency. Then there are these faults called liveness faults, which aren't distinguishable from network latency. Liveness faults can't cause safety failures in asynchronously safe consensus protocols: in an asynchronously safe protocol, liveness faults don't cause consensus failures, and liveness faults are indistinguishable from network latency, right? So it's only the faults that are distinguishable from network latency that really matter here. And something is indistinguishable from network latency if it's the result of a different resolution of race conditions, so anything that has to do with the ordering of messages and timeouts doesn't count. And so basically any way that you can run the protocol in a valid way, which is the way that you can run the protocol with just liveness faults, is valid under any possible synchrony, valid under any possible coordination. And so there are two ways to run the protocol in an invalid way. One of them is to do an
invalid state transition, to go from one state to another state where there's no transition, and the only way that would be evidenced is with an invalid protocol message, basically. Through the messages that you see from a node, they're going to evidence that node having been at different protocol states, and unless all of those states have a protocol state transition through them, then it's not plausibly honest. So for example, if a protocol node has exhibited this state and this state, well, there is no single path of state transitions through both of those, and so there's no valid way to have executed the protocol like that. And this is what an equivocation looks like. An equivocation looks like: oh, there's no way for you, as a single-threaded protocol execution, to have hit both of those points. And so instead we speculate that, oh, you must have run the protocol more than once, or run a modified version of the protocol. So basically, as long as the evidence that a node produces could have been produced by a protocol execution, then it couldn't have caused a safety failure. The safety failures are basically going to be caused by things that can't be called protocol executions. So it basically amounts to: invalid messages, invalid protocol transitions, and running multiple versions of the protocol. An invalid transition will just jump randomly, and running multiple versions of the protocol will let you take only valid transitions, but get to two different valid locations along paths that would have been impossible to traverse with a single execution of the protocol.
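That "plausibly a single protocol execution" test can be sketched as follows. The encoding is my own assumption: a message is (sender, value, justification), with the justification a frozenset of earlier messages. For simplicity this checks direct justification only; a real implementation would take the transitive closure.

```python
# Sketch: a validator's messages are plausibly one single-threaded protocol
# execution if every pair of them is ordered by justification.

def justifies(later, earlier):
    """True if `earlier` appears directly in `later`'s justification."""
    return earlier in later[2]

def single_execution(messages):
    """Is there a single state-transition path through all these messages?"""
    msgs = list(messages)
    for i, msg in enumerate(msgs):
        for other in msgs[i + 1:]:
            if not (justifies(msg, other) or justifies(other, msg)):
                return False  # two states no one execution could both reach
    return True

# An honest pair: the second message justifies the first.
m_first = ("v", 0, frozenset())
m_next = ("v", 1, frozenset({m_first}))
# An equivocation: a second message that ignores the first entirely.
m_equiv = ("v", 2, frozenset())
```

With these, single_execution rejects the pair containing m_equiv but accepts the honest chain.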
So you could use both messages? You can, what? Use both messages? Yeah, if you could use both messages... Yeah, that's not a safety problem, that's a liveness problem. So the observation is: oh, you can use both messages. And yeah, you can, but those don't factor into the safety considerations. It's really only liveness; this is a liveness issue. Anyone else? Yeah, hi. Do you see a use case where t is greater than zero? Yes. So the question is: do I see a use case where t is greater than zero? And the answer is yes. For Byzantine fault tolerance, we really want to have more than zero fault tolerance, normally. And that's what t is: t is the Byzantine fault tolerance number. People normally like to have a number like a third, but that's what t is. Yeah, so, you know, it's definitely important to be able to maintain common protocol futures with someone in the context of a couple of Byzantine faults. It's impossible to maintain common future protocol states in the context of a hundred percent Byzantine faults, if you're going to be making any kind of final decisions, if you have any kind of notion of this invariant-safety property. So, anyone else? What proportion of Byzantine faults do you see? Sorry, the question is: what proportion of Byzantine faults should we tolerate? Yeah, what proportion do we want to see?
Yeah, so the cool thing that I've been able to do with this protocol is to allow every node to have their own fault tolerance threshold. So I could run my node with a fault tolerance threshold of, like, 30%, you could run yours with a fault tolerance threshold of 50%, and you won't lose consensus safety with anyone if there are fewer than 50% Byzantine faults. You won't lose consensus safety with anyone with a tolerance threshold of 50 or more, and I won't lose consensus safety with anyone with a threshold of 30 or more if there are fewer than 30% Byzantine faults. So basically, in some way, the fault tolerance threshold is actually not part of the protocol; it's something that the client will input. It's not a first-class citizen in the protocol. And so I can't tell you, oh, it has a fault tolerance of one third, the way people normally do. But if you expect validators to be online most of the time and to perform well in economic terms, then I think we should expect very high levels of fault tolerance for consensus safety.

Yeah, hi. So you have to choose a number... yeah. So the question is, why would you ever choose a lower fault tolerance number if the network is producing high fault tolerance? And the answer is: yeah, actually I think it's better for people to choose the highest fault tolerance number that the network will really produce safety on, because that makes it more difficult and inconvenient for the network to produce less safety. It makes degradations in quality more costly to the validators.

Yes, hi. Wouldn't that theoretically tend towards zero fault tolerance, because nobody wants to be left out? Because if you allow everyone to set their own fault tolerance threshold, then, in my opinion, everybody would want to pick the same number as everybody else, because nobody wants to be left out.

So, with a lower number there are more ways to be left out with a small number of faults. Actually, the lower your fault tolerance threshold, the more ways you can be left out. And if you have a high fault tolerance threshold, you can basically switch to whichever fork the validators reconciled on after, say, the attack that caused the low fault tolerance node to spin off. So actually, I think the probability that you'll have to manually intervene to re-sync with the consensus is going to be much higher if you have a much lower fault tolerance threshold, because it takes fewer faults to cause you to spin off. A lower fault tolerance threshold really is straight up less secure for the users running at that fault tolerance level, basically because it takes fewer Byzantine faults to cause consensus failure between them and other nodes. The only kind of way you can get a lot of safety that way is by assuming that all the correct nodes see the same finalized block first, but that's a sketchy assumption, because, I mean, exactly what the adversary would be trying to do is show nodes inconsistent finalized blocks in order to make nodes fall out of consensus.

So, a very exciting thing that I haven't mentioned yet: this consensus safety proof works great for the binary consensus and the blockchain consensus, and we have a few other consensus protocols: the integer consensus, a consensus on a list, and a consensus on a concurrent schedule. All of those are implemented; they're prototyped in the correct-by-construction Casper repo on the Ethereum GitHub. And now we're working on sharding, because, conveniently enough, this framework actually says nothing about which protocol states nodes
need to achieve, just about the fact that they have protocol states in common in the future that they could potentially achieve. And this type of thing, where if p is safe here, then the negation of p cannot be safe somewhere else, doesn't actually require that a node operating at this protocol state ever find out whether p or not p in order to enjoy the property that not p is not safe. So it turns out that the safety proof doesn't run up against the scaling problem at all, and so we get to pretty much exist entirely inside this context for the sharding protocol. So basically, the consensus safety proof and methodology don't change at all: I have protocol states that have messages from different shards, and an estimator that maps onto blocks for every shard. So basically I have a sharded fork choice rule, and protocol states that also mirror the sharding, and it all fits inside the same safety proof. Which is why all these protocols are called the CBC protocols: they're all derived, all designed, you know, specified to satisfy the same safety proof. And that makes it super convenient when generating new protocols or modifying the protocol. For example, adding validator rotation to the protocol required no changes to the safety proof, which is pretty cool. Normally you have to write an extra safety proof to do validator rotation, but I didn't have to, which is awesome.

Yeah, so, looking for more questions. You know, also happy to adjourn the whole thing if you guys don't have any more questions.

On security: if one node is active on two different forks of the protocol, is it being punished for this? Or how have we solved nothing at stake? Yeah, so absolutely we're going to penalize Byzantine behavior, especially if it causes consensus failure. Basically, the goal is to penalize all malicious Byzantine behavior. I believe accidental, bug-caused Byzantine behavior should be minimally penalized. It's a bit difficult, but if people have a whole bunch of coordinated failures that cause consensus failure, then I feel like, well, it's hard to pass that off as a random software bug. How did you coordinate a bunch of software bugs? So basically it's the job of the incentive mechanism to penalize these Byzantine faults, and absolutely, that's tightly related to how the nothing at stake problem is addressed in security-deposit-based proof of stake protocols. The question was, are we going to penalize nodes that run multiple versions of the protocol, and isn't that what's necessary for nothing at stake. Anyone else? I think we're... oh yeah, here we go.

Sure. So, given the current state of development, do you already have a vision of how the final implementation of sharding might look in the end? The question is, do I have a vision of what a final implementation of sharding might look like in the end. Well, yeah. From my point of view, the sharded consensus protocol provides a concurrent execution schedule and assignments from different parts of the schedule to different shards, such that if you're running a node and you want to sync up on any shard, you can do that, and such that the semantics of the virtual machine on the blockchain can take advantage of the atomicity provided by the block structure of this system. So basically I have a very clear understanding of the basic properties that the sharded consensus protocol will have. How exactly that will relate to the transaction model is a little bit more of an open question, but I know what the consensus protocol is going to provide to the transaction model as an interface. So yeah, it's a good question, and I think it's still going to take time to find out exactly how the smart contract ecosystem moves to a concurrent environment. Okay, fine, one more.
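To make the sharded fork choice idea concrete, here is a toy sketch. The ShardedView class and all names in it are my own illustration, not the actual CBC sharding spec: each shard keeps its own chain, and the estimate maps the protocol state to one head per shard, so a node can sync and follow any single shard without processing the others.

```python
# Toy illustration (my own names, not the actual CBC sharding spec):
# each shard keeps its own chain, and the "sharded fork choice" is just
# a map from shard id to that shard's current head.

class ShardedView:
    def __init__(self, shard_ids):
        # Per-shard chains: shard id -> list of block labels.
        self.chains = {sid: ["genesis"] for sid in shard_ids}

    def add_block(self, shard_id, block):
        # Extend one shard's chain; other shards are untouched.
        self.chains[shard_id].append(block)

    def fork_choice(self):
        # The estimate maps the protocol state to one head per shard.
        return {sid: chain[-1] for sid, chain in self.chains.items()}

view = ShardedView(["A", "B"])
view.add_block("A", "a1")
view.add_block("A", "a2")
view.add_block("B", "b1")
print(view.fork_choice())  # {'A': 'a2', 'B': 'b1'}
```

The point of the sketch is only that the per-shard heads are independent: syncing shard B never requires replaying shard A's blocks.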
Adding to this question: the state is divided up into different shards as well. So suppose one smart contract is in one shard, and it makes a call to another smart contract in another shard. Would that be possible in one transaction? So the question is, would a call from one smart contract to another be possible in one transaction. And in my current sharding spec, the answer is yes, but the return value doesn't come back within the same block. So, yes. Great. Well, thanks a lot for coming, everyone.
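To illustrate that last answer: the cross-shard call is committed in one block, but its return value only arrives on the caller's shard in a later block. Here is a minimal toy model of that asynchrony (the Shard class and its methods are my own illustration, not the actual sharding spec):

```python
from collections import deque

# Toy model of an asynchronous cross-shard call (my own illustration,
# not the actual sharding spec): the call leaves in one block on shard
# A, shard B executes it in its next block, and the return value only
# lands on shard A in a block after that, never in the calling block.

class Shard:
    def __init__(self, name):
        self.name = name
        self.inbox = deque()   # pending messages from other shards
        self.blocks = []       # each block is the list of events applied

    def make_block(self, outgoing=None):
        """Apply all queued cross-shard messages atomically in one
        block, optionally sending one new message to another shard."""
        events = []
        while self.inbox:
            events.append(self.inbox.popleft())
        if outgoing is not None:
            target, payload = outgoing
            target.inbox.append((self.name, payload))
            events.append(("sent", payload))
        self.blocks.append(events)
        return events

a, b = Shard("A"), Shard("B")
a.make_block(outgoing=(b, "call f()"))      # block 0 on A: call leaves
b.make_block(outgoing=(a, "return value"))  # block 0 on B: call handled
a.make_block()                              # block 1 on A: return arrives
print(a.blocks[1])  # [('B', 'return value')]
```

The block that makes the call (A's block 0) never contains the return value; it only shows up after B has produced a block of its own, which is the "yes, but not within the same block" behavior described above.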