I'm Aditya, a researcher with the Ethereum Foundation, and I'll be conducting the session. Vlad is going to start the session with a few lines on sharding philosophy, so let's welcome him in.

So I'm just here to set the scene a little bit for Aditya's talk on sharding, and to explain the basic philosophical difference between sharding in CBC Casper research and sharding in Ethereum 2.0 and most other sharding protocols in the space. The main difference is that we regard sharding as being entirely within the consensus protocol problem definition. Consensus protocols are protocols that allow nodes in a distributed network to make consistent decisions, and normally we imagine that that means they're making the same decision. In sharding, however, what we're going to do is try to make consensus protocols scale by weakening that condition and allowing them to make consistent decisions on different parts of some state. Normally, a binary consensus protocol decides on a bit, and a blockchain consensus protocol decides on a blockchain; a sharding consensus protocol somehow decides on a sharded thing, meaning something that is made up of more than one part. A binary consensus protocol fundamentally can't be sharded: it's impossible to have decisions that are consistent on different values in a binary consensus protocol, and it's also impossible to get any scalability out of it, because the information required to decide on a bit here and to decide on the same bit over there is the same information.
Whereas what we need for scalability is that the information, or the protocol messages, required to make a decision over here, in this part of the state, should be much less than the information required to make all of the decisions that the consensus protocol can make everywhere. This is just not possible for a binary consensus protocol. So a sharding consensus protocol has consensus on some big state with different parts, where the nodes that make decisions only need a relatively small number of protocol messages, a relatively small amount of information, to make only the decisions that they're interested in. This differs somewhat from the more traditional, easier-to-grok sharding approach, where you have one consensus protocol that's not scalable, and then you have shards that notarize into that consensus protocol, and you use that consensus protocol to reason about decisions that are somehow not quite native to it, hanging off of it. We're going to go through a lot of stuff, but I think it's going to feature a long conversation about the fork choice rule, which makes this possible for clients using the consensus protocol: they'll be able to evaluate only the part of the fork choice rule that is relevant to the decisions they're interested in, on the particular shards they're interested in. In the CBC Casper consensus protocol family there's a separation between the distributed systems considerations and this thing called an estimator, which maps protocol states to consensus values, and that is the place where the fork choice rule goes. The fork choice rule takes your current state and picks out one fork.
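As an illustrative aside, the estimator idea can be sketched in a few lines of Python. This is a toy sketch, not the actual CBC Casper specification: the protocol state is reduced to each validator's latest message, and all names are hypothetical.

```python
# Toy sketch of "estimators": functions mapping a protocol state (here,
# just each validator's latest message) to a consensus value.

def binary_estimator(latest_votes, weights):
    """Binary consensus: return the bit backed by the most validator weight."""
    score = {0: 0.0, 1: 0.0}
    for validator, bit in latest_votes.items():
        score[bit] += weights[validator]
    return max(score, key=score.get)

def is_ancestor(anc, block, parent):
    """True if `anc` is `block` or one of its ancestors in the parent map."""
    while block is not None:
        if block == anc:
            return True
        block = parent.get(block)
    return False

def chain_estimator(latest_blocks, weights, parent):
    """Blockchain consensus: the estimator is a fork choice. Walk down from
    the genesis block, at each step following the child with the most
    latest-message weight behind it (a GHOST-style rule)."""
    def weight_behind(block):
        return sum(weights[v] for v, b in latest_blocks.items()
                   if is_ancestor(block, b, parent))
    head = next(b for b, p in parent.items() if p is None)  # genesis
    while True:
        children = [b for b, p in parent.items() if p == head]
        if not children:
            return head
        head = max(children, key=weight_behind)
```

The point of the separation is that the distributed-systems machinery stays the same while the estimator, and hence the kind of value decided on, is swapped out.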
It's not really the distributed systems component, and we'll spend most of today talking about the fork choice rule, which fits inside this kind of consensus protocol: a distributed system that provides consistent decisions on values. In this case we're going to be specifically providing decisions on little pieces of an overall very large consensus value, and in this way we get scalability entirely inside the normal consensus protocol definition. So, sharding for me is philosophically about scaling consensus protocols, which are about making consistent decisions, and scaling basically means that we make decisions about the things we're interested in while doing much less work than is required to make all the decisions. This contrasts with the Ethereum 2.0 approach, because there, if you're syncing the beacon chain, you're getting all the information required to make all the decisions, at least as far as the fork choice rule is concerned. So hopefully that provides some context for the conversation, and for some of the ways in which CBC Casper sharding is going to be different from Ethereum 2.0 sharding. With that, please welcome Aditya.

Alright, so I think we should get through this presentation rather quickly. They gave me an hour and a half; I think the slides should be done in 40 minutes. If you have a question about a particular slide, please do stop me, but if it's something more general I request that you wait till the end. The outline goes something like this: I'll describe some of the main features of the CBC sharding design, then some discussion of why it's better, or how it's an extension of existing sharding solutions, and then some open problems that we have with this proposal.
So of course we want to come up with a sharded, scalable blockchain, and we know that there is something called the scalability trilemma. For those of you who don't know it, you can picture it as a triangle where each vertex represents a desirable property we want from our system: scalability, decentralization, and security. What do these mean? Scalable means that the entire system as a whole is able to process a large load. Decentralized means that no single validator in the system has to handle a load corresponding to more than one shard, or something like that; in other words, there's no requirement for a supernode that processes all transactions everywhere, or some sort of weird assumption that there's a node that sees all messages, because we want consumer hardware, hopefully, to be able to run each of these validator nodes, not some mining farm somewhere. Secure means all shards enjoy the same level of security, so it shouldn't be the case that breaking security in one shard is super easy while the other shards are really secure and have great safety properties; and hopefully, as a desirable property, the security in each shard is as strong as consensus safety across all nodes for some good consensus protocol of our choosing. I'm going to assume that you're fairly familiar with the Eth2 proposal, and I'll use it as a reference to help explain the CBC sharding design. Over here, in the Eth2 sharded system, we have one beacon chain at the top and a number of shard chains that are children of this beacon chain. In the CBC design it looks a little bit like this, so it's structurally different.
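The structural contrast just described can be sketched in code. This is purely illustrative, with hypothetical names: Eth2's beacon chain with flat shards is a tree of depth one, while the CBC design allows a shard tree of arbitrary depth.

```python
# Hypothetical sketch of the two topologies: Eth2 as a depth-1 tree
# (beacon chain plus flat shards) vs. a CBC-style arbitrary-depth tree.

class Shard:
    def __init__(self, shard_id, parent=None):
        self.shard_id = shard_id
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):
        """Path from this shard up to the root (excluding self)."""
        node, path = self.parent, []
        while node is not None:
            path.append(node)
            node = node.parent
        return path

def depth(shard):
    return len(shard.ancestors())

# Eth2-style: every shard hangs directly off the beacon chain.
beacon = Shard("beacon")
eth2_shards = [Shard(i, parent=beacon) for i in range(4)]

# CBC-style: shards form a tree of whatever depth you choose.
root = Shard("root")
left, right = Shard("L", root), Shard("R", root)
leaf = Shard("LL", left)
```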
You have a shard tree with a greater depth; it can be an arbitrary shard tree of any depth of your choosing, and this is structurally, fundamentally different from Eth2. The main highlights of the CBC sharding design are going to be with respect to the fork choice, not the consensus: they are the cross-shard messaging system and the hierarchical sharding design. The main features of any sharding solution are determined by how cross-shard transactions get through. It's fairly easy to come up with a sharded blockchain which is in fact just independent chains running on their own, with no interaction between any two chains. For example, one early sharding solution, which looks rather primitive now, runs in phases: there's a sharded phase where shards process transactions that occur within themselves, and then you stop sharding and there's a phase to process all cross-shard transactions. This is just to highlight that the way you handle cross-shard transactions really determines the features of your sharding system. And hierarchical sharding is a different way to organize your shards; as we'll see later, it arguably allows for better modularization of the shard space.

So, the cross-shard messaging system. Let's look at what the workflow for Ethereum 2 is today. We have one parent chain, which is the beacon chain, and children c1 and c2, which are both shards, and we are trying to get a message from c1 to c2. The first thing that happens is that some user on c1 produces a transaction in that shard. Then it goes into the beacon chain, and that's handled by the protocol, meaning there's some crosslink committee that collects all the messages that are supposed to go to the beacon chain and puts them into the beacon chain. The last part is that some user on c2 makes that chain aware of the existence of such a cross-shard transaction, and refers the shard to the specific receipt on the parent.

With the CBC cross-shard workflow, the first part is rather similar: a user initiates a transaction on the first shard, and the protocol handles the delivery of that message to the parent. But the last part is quite different: what we are aiming for is that the protocol itself handles the delivery of this cross-shard message from the parent to the other shard. What this really allows us to do is give the user the experience of a single transaction on one shard behaving as, or appearing as, a real cross-shard transaction going from one shard to the other, without any additional input from the user.

So let's look at this cross-shard system a little more deeply. Messages are going to be objects that contain the sender shard, the destination shard, the target block in the destination shard, and the blocks-to-live value; these last two are described on the next slide. We also assume that blocks in shards contain logs of messages that are sent and received, so we maintain lists of what we have sent and what we have received. Consider two shards, shard A and shard B, where shard A is sending a cross-shard message to shard B. The message originates from here, meaning that it's in the send log of this block in shard A, and the message says: my target block in shard B is this one. So the target block tells you the starting point from which the other shard can receive your message, and the blocks-to-live value works like this: here the blocks-to-live value is 2, which means this message can be received in the other shard starting from the target block and up to 2 blocks later. This puts a restriction on where the message can be received in the destination shard. We really want to make this sending and receiving of messages consistent across shards, so we have an atomicity condition, which says that a message should appear in the send log of the sender shard and eventually in the receive log of the recipient shard, or it should appear nowhere at all. It's either sent and delivered, or it's not sent at all. If we analyze this condition more closely, it means we want a liveness property, which says that messages sent but not received are eventually received, and a safety property, which says that only messages that are sent are ever received; we don't want it to be the case that some shard receives a message that no other shard has sent, a message appearing out of nowhere that no one initiated. We actually enforce this atomicity condition using the fork choice rules of the shards, and it also depends on what the shard hierarchy is. I just want to mention that this safety condition is at finality: it could be that in the fork choice at a particular moment a message is sent but not received, but it will never be finalized that it was sent and not received. You'll never have an atomicity violation at finality, even though there could be a moment, while the fork choice is making some changes, when there is an atomicity violation; hence the "eventually".

Really quick: if it's never received, does the send get deleted?

Yeah, exactly, so there's... are you going to talk about this one?
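The message objects and the at-finality atomicity condition above can be sketched as follows. This is a minimal illustration with hypothetical field names, not the actual CBC specification.

```python
# Illustrative only: a cross-shard message names its sender and destination
# shards, a target block in the destination, and a blocks-to-live window.

from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    msg_id: str
    sender_shard: str
    dest_shard: str
    target_height: int   # height of the target block in the destination shard
    blocks_to_live: int  # may be received up to this many blocks after target

def receivable_at(msg, height):
    """Heights in the destination shard at which the message may be received."""
    return msg.target_height <= height <= msg.target_height + msg.blocks_to_live

def atomicity_ok(finalized_send_log, finalized_receive_log):
    """At finality, every sent message is received (liveness) and every
    received message was sent (safety); before finality the logs may
    temporarily disagree."""
    sent = {m.msg_id for m in finalized_send_log}
    received = {m.msg_id for m in finalized_receive_log}
    return sent == received
```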
So the way this works, there are two versions. One is where the parent shard is communicating with the child, in which case the child will orphan the blocks that didn't receive the message in time, if the parent still has the send in its fork choice. The other version is when the child is sending to the parent, in which case the child will orphan the block that sends the message if the parent doesn't receive it. The child follows the parent, so when there's an atomicity failure it's always because the child has a send that the parent hasn't received, as opposed to vice versa. Alright, there's more on your question in the next slide.

Alright, so we enforce these constraints via the fork choice, and similar to Eth2, the shard chain fork choice is actually determined by the fork choice of the parent; but here it's more general, because a particular shard can have multiple children. Any child shard blocks that are referenced in the parent's fork choice, either by virtue of a send from the parent to the child, or by the parent receiving a message from a particular block in the child, are going to remain in the child's fork choice; this is the condition we enforce. The first case is the parent sending a message to a child: if in this block in the parent there is a message which says "this is the target block in the child", then everything up to that target block is going to remain in the child's fork choice. The other case is where the parent receives a message from the child, originating from this block, which means the parent references this block in the child; here we enforce the condition that the child's fork choice always includes this block. I hope that answers your question: if the parent receives a message, it's always going to be a send in the child's fork choice, and the other way around, if the parent sends a message, then the corresponding child block is going to be in the fork choice. If the parent
originally sent a message to the child, and then that block got forked out of the parent's fork choice, then the corresponding child block which receives it also gets forked out, because we don't want it to be the case that the child is receiving a message that was never sent from the parent. The parent's fork choice is quite independent, except for a few restrictions that are self-imposed because of the way the parent's fork choice behaved in the past. For example, you can't have the parent, say, sending to one fork of the child in the past and then later receiving from another fork of the child: with the first message the parent instructs the child to always keep one block in its fork choice, and then later, with the other message that it receives from a conflicting fork, it's asking the child to always keep that other block in its fork choice, but these are incompatible blocks. Those are the restrictions we impose on the parent's fork choice. That's all nice, and we have this fork choice and messages getting around between different shards, but how do they actually get there? It's not the internet, where you can just say "I send a message to the other guy"; there's no guy at the other end, it's a shard, so who are we really sending messages to? Let's see how messages are actually passed from one shard to the other. Consider the case of two shards, a parent and a child, and let's look at how validators are arranged in these shards. Say there's a validator on the child shard. The condition we impose is: if you are a validator on a child, you are also a validator on all of its ancestors, so this guy also has to validate the parent shard. Now let's say we are trying to make a cross-shard message happen from the child to the parent. A user initiates some transaction on the child, this validator sees that message, retrieves it from the child's shard, and
then puts it in the mempool of the parent shard, and hopefully, if there's liveness on this cross-shard route, a parent validator picks it up from the mempool and puts it in one of its blocks. The other way around, how messages get from parent to child, is quite similar. This actually allows us to do really complex routing. For example, say we have this kind of situation, where you have a parent and children C1 and C2, and we are trying to make a message get from C1 to C2. Let's look at how the validators are arranged here: we have a validator on C1, Goku, who is also a validator on the parent, because of the rules imposed on validators, and on the other side it's the same thing, so Vegeta, a validator on the other child, also has to be a validator on the parent. Now we are trying to make this cross-shard message go through: it appears in the chain of C1, Goku picks it up from C1 and puts it in the parent, and then Vegeta picks it up from the parent and places it in the other child. This really allows you to build complex routing functions, because with hierarchical sharding you can arrange your shard tree pretty much as you like. So let's look at some prototypes that we've put out in the past; maybe some of you have seen this animation on Twitter. There is a parent shard and a child shard; these green horizontal lines are the blockchains of the shards, these boxes are blocks, and these vertical lines are messages going through. Let's wait for it to reset... you can see messages getting from the child to the parent, and if you look closely there are also orphaned messages: messages that were once sent but never received, or once received by the child but then forked out of the parent, and so on. The interesting thing is that because this is all fork choice based, you can also change the hierarchy of the shards. Here is an example where shard 0 and
shard 1 are actually switching their parent-child relation. What these prototypes show is that this thing is not complete bullshit, and some of it actually works. We can even go one step further and make an arbitrary shard tree like this, with 7 shards, and change the hierarchy pretty much arbitrarily as we like, and it still works, nothing breaks. If you want to find out more about this cross-shard messaging system, there's a write-up you can read, and there are also proof-of-concepts implemented on the cbc-casper GitHub, so if you have the will to read software you can take a look at that; it's not that bad, I think it should be okay to read.

Right, so now let's take a look at hierarchical sharding and what advantages it has. This is a depiction of Eth2, and let's say the two contracts people care about most, MakerDAO and Uniswap, exist on this shard. Which shard would you make your account on? These are the two contracts you care most about, so you're going to want to make it on this shard, so that every other transaction you make doesn't have to be a cross-shard transaction. And I'm sure there are some really popular accounts, really popular contracts, that everyone wants to share a shard with. With hierarchical sharding we can actually alleviate this problem, a little bit, not completely: we provide equal access to the most popular contracts by placing them in the root, so any account on any of these shards has equal access to these accounts on the root. And some less popular contracts, like Compound, you put on shards at depth one, so anyone in this subtree has equal access to those accounts; anyone who cares about MakerDAO but not Compound will want to exist on these subtrees but not that one. So this kind of depicts how you can modularize your shard space better. Any problems with this design? Well, I think the biggest concern is the message load that is put on the root shard, because most of
these cross-shard messages are going to be routed through the root, which means that the validators on the root shard, which is everyone, have to process them. Consider this example: let's say cross-shard messages account for 10% of the entire system load, and 5% of cross-shard messages are uniform, uniform here meaning they occur between randomly chosen pairs of accounts. If you have a binary shard tree, meaning the root has two children, left and right, and accounts are equally distributed between left and right, then these 5% uniform messages get divided by 2 for the number of messages going through the root, so 2.5% of all cross-shard messages go through the root. And since cross-shard messages account for 10% of the system load, that means 0.25% of the total system load is the burden on the root shard. This puts a fundamental limit on the scalability you can have, because of the decentralization desideratum that each validator should not have to process load corresponding to more than one shard: the 0.25% number here corresponds to a 400x scalability limit. But this example is rather contrived, because we really shouldn't be assigning probabilities to things whose behavior we don't know; we really don't know how this distribution behaves, so it's just a depiction. A few possible solutions for this: you could do some sort of Eth2-style design where you come to consensus on only the roots of these cross-shard messages; you aggregate them in some structure and only come to consensus on the roots, so you don't put the actual messages in shards, you put the roots of the aggregated messages, and you make the preimage available in some other way. For example, you can put it on some chain like LazyLedger, where everyone downloads only the parts of the data that they are interested in, but not everything, so that kind of
reduces the load on consensus. Or you can come up with some sort of data availability games to make messages sent by one shard available in another shard without putting them in its blocks. The other problem is load balancing across shards. As I mentioned earlier, hierarchical sharding allows you to modularize your shard space better, meaning you can place the most popular contracts on the root, but then how do we decide which the most popular contracts are? There is a high social cost to placing contracts on the root shard, because then all validators have to keep track of its state; it's going to be replicated by all the validators. So there is a high social cost, and we need to decide how and where we place things in this sharded system. That's one kind of problem. I guess this is probably the last slide, which asks how we do validator set rotation across shards. Consider a validator who only wants to validate this shard: that guy only has to keep track of the shard at depth one and its parent, so he has to validate two shards and can get away with some consumer-grade hardware. But someone who wants to validate this left path is going to have to keep track of three shards, and maybe it takes industry-grade hardware to do that. So we need to maintain different classes of validators in our validator set, corresponding to different hardware, and we need to come up with a way to do this validator rotation. One solution is to have a static complete binary tree of a certain fixed depth and just assign validators to leaves, but that takes away the dynamic shard tree structuring that you can do.

Why can't you change the structure of a tree with bounded depth, while keeping the depth always bounded? Like you mentioned, if the depth is max 10, why can't we change from an arbitrary tree of depth 10 to another arbitrary tree of depth 10, without ever going through
a tree of depth eleven? Well, you can, but then... I guess you can, but I'm sure there are only some transitions that you can do, and not all. Okay, that's it for the presentation; I guess we can open up for questions. I wanted this to be more of an interactive session, so we can take questions for a long time.

Hey, thank you very much, very good presentation. I just want to ask, in that example of the MakerDAO contract being in the root: let's assume it's really popular, but isn't the maximum throughput of that MakerDAO contract, in terms of transactions, still limited to the capacity of a single shard? How is MakerDAO ever able to do more transactions than that, or is that not possible?

Yeah, that's true. That seems fine, though, because, well, if you only put the most popular contracts on the root shard, there is no other load that the root shard has to bear.

You'd still have the same problem, no? If you want to run MakerDAO transactions in parallel, you need to open up two different contracts and let them run together.

Sure, you could shard that contract as well if you want; you'd probably make two of them. You can make as many as you want, so if you're bounded by, say, TPS, and you can shard that contract easily, you just have multiple copies of it, one for each shard of the tree or something.

Okay, but that sounds a bit like a recursive problem, because MakerDAO itself tries to keep balances in consensus, right? So if you shard it, then you need to have consensus between the shards, like you shard it again or something. If you have a single ledger, parallelizing it is hard: you want to have one balance, so how can you parallelize it? I just wanted to check that I understood: with this construction, MakerDAO basically just has the capacity of one shard, right?

That's it, but in any sharded system, any single smart contract existing on one shard has the scalability limit of that shard. I mean, that's a fundamental issue in
all sharded systems.

So my understanding was that the cross-shard messages are asynchronous, right?

Well, asynchronous in one sense; with respect to the fork choice, no.

I guess what I'm wondering is: if I have my smart contract and I want to interact with MakerDAO, and I'm in the same shard as MakerDAO, can I do a sort of synchronous call?

There are no claims yet about how the execution environment, the execution layer, is going to work. I'm not sure whether you can compose smart contracts using these cross-shard transactions or not, so I'm not sure how the execution layer plays with this. We have had some proof of concepts with asynchronous calls, something like: you just call a function on the other shard that you don't ever expect to hear back from. That's something you can definitely do, but I'm not sure how you handle it if you expect it to return some value to you, or to execute some function and give something back, or something like that.

The thing that this architecture provides is the cross-shard messaging; it doesn't say exactly what the semantics of the virtual machine will be. For example, when you send a call, can anything re-enter the smart contract, and then it continues afterwards, as if concurrency works like re-entrancy does today; or is the smart contract going to be frozen and wait for the transaction to be over? All of these things are implementable in the same communication model from the point of view of the sharding, even though from the point of view of the smart contracts the communication model will be quite different. So I hope that we explore different models for smart contract communication, see what people like, and try to support many of them, because it shouldn't make a difference to the sharding architecture, at least not at the level of the fork choice rule.

So how do you decide on the topology? Like, who is
deciding on that?

Well, the load balancer decides. The load balancer needs to figure out what should go where in order to maximize the throughput and minimize the overhead of the system. Some things need to be closer to each other because they communicate more often; things that communicate less often should be further away from each other. So basically it's the load balancer's responsibility to balance the shards, to figure out how to minimize the load of communication and the latency of communication; there are actually a number of goals the load balancer has to achieve. It's the load balancer's responsibility to make sure that the shard hierarchy is useful for the particular virtual machine configuration that people are going with, for the behavior that users are creating through their aggregate activity.

Can it keep up with that?

So this GIF actually depicts a changing shard hierarchy. I think one example of load balancing would be: let's say there's some contract XYZ here, and it turns out lots of people are using XYZ, and they all exist here, but none of them use Compound too much, or whatever is on this shard; then it makes sense to attach another shard as a child of this one, but not of that one, because you need to be close to this. So efficient clustering is a problem that the load balancer would solve.

Can you explain what the load balancer is, and where it goes?

I'm not sure whether it should be a smart contract or whether it should be handled in the protocol, but it's something that efficiently clusters frequently used accounts together.

But it's got to exist at every layer?

Yes, and it needs some information about what's going on in its neighborhood as well. That's actually a topic of ongoing research; we're not sure what the best way to go about it is, but there are a couple of strategies out there.

The load balancer can also create new shards... Sorry, what?
The load balancer can also create new shards.

Yeah, I guess the load balancer instructs when to create new shards, what to place in the new shards, and so on. So basically there are a number of shard rotation operations: you can move a child up a level, so that it's now a sibling of its former parent; you can move one of the children to be the child of one of the other children; and there's one more operation where the root shard changes, which is a different kind of operation, where the root shard swaps with one of its children; there's no other way to get rid of the root shard than to swap it with one of its children. With these three basic operations you can go from any shard architecture with n shards to any other, and the load balancer basically has these operations as the interface it's going to use to change the structure of the system. And yeah, it can also destroy and create shards, although it can't destroy or create the root shard or any shard with children; you can only destroy and create shards when they don't have children. So with those operations as its interface, the load balancer can be specified to make any of these changes.

Does that mean a validator should actually try to follow the new shard, because of things like rebalancing, and then try to catch up, like getting the data? I'm not sure whether the validator is part of this load balancing function; as a validator, should I follow it and see that there is a new shard?

I think that would happen... well, no, there would be a separate validator set rotation function that tells you where you're supposed to be validating.

Would you say it's a hierarchical load balancing system?

So, I mean, it is hierarchical in the philosophical sense as well: this guy is the parent of this subtree, the load balancer here determines what goes on in this subtree, and the load balancer here pretty much determines what happens everywhere. But it is hierarchical
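The three restructuring operations just described can be sketched as follows, with the tree stored as a child-to-parent map. This is a hypothetical illustration of the interface, not the actual load balancer specification.

```python
# Hypothetical sketch of the three shard-tree operations: promote a child
# to be a sibling of its former parent, demote a shard under a sibling,
# and swap the root with one of its children. `tree` maps shard -> parent
# (the root maps to None).

def promote(tree, shard):
    """Move `shard` up one level: it becomes a sibling of its former parent."""
    parent = tree[shard]
    grandparent = tree[parent]
    assert grandparent is not None, "cannot promote a child of the root"
    tree[shard] = grandparent

def demote(tree, shard, new_parent):
    """Move `shard` down to be the child of one of its current siblings."""
    assert tree[new_parent] == tree[shard], "new parent must be a sibling"
    tree[shard] = new_parent

def swap_root(tree, child):
    """The only way to replace the root: exchange it with one of its children."""
    root = next(s for s, p in tree.items() if p is None)
    assert tree[child] == root, "can only swap the root with a direct child"
    for s, p in list(tree.items()):
        if p == root:
            tree[s] = child  # old root's children now hang off the new root
        elif p == child:
            tree[s] = root   # new root's children now hang off the old root
    tree[child] = None
    tree[root] = child
```

Creating and destroying leaf shards (but never the root or an internal shard) would just add and remove entries whose shard has no children.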
Yes, yes it does.

Vlad, I'm curious what your thoughts are about something like a concurrent EVM, a virtual machine model that's able to run parallel threads. Do you think that kind of virtual machine would be better suited to this sharding model?

This sharding model is independent of the virtual machine architecture, so I don't think that's really the right level for the question. However, it might be useful. Right now, the way we're operating, we don't know anything about the semantics of the smart contracts at the level of the sharding. It's possible that if you gave the load balancer more information, by making concurrency information readable from the virtual machine, it could do a better job; but at this level of the specification we're not looking inside the smart contracts at all, so it isn't going to make a difference. If you wanted to look inside the smart contracts to do the load balancing, that would be really cool if it gives you an edge, but I can't tell. Let's just keep going for now.

You said a block, or a message, has this blocks-to-live value. How do you know what to set it to?

It's just something you have to determine, like deciding what the gas limit should be. If you set it to infinity, your messages can be received at any point, so you never learn that one won't be. The reason to have it is this: say a child sends a message to its parent with a blocks-to-live of 2, and the parent chain progresses beyond that. Then you know that message will never be received, and you orphan the block that sent it in the child. If you set the value too high, you're in a non-deterministic state for too long, so you don't want it too high; and you don't want it too low either, because maybe the mempool doesn't process transactions that fast. A validator doesn't know anything about traffic as such, but a child's validators are also validators on the parent, so they have a lot of information that way.

And for a parent-to-child message it's actually authoritative: the child has to receive the message. Say this is the parent shard, this was the latest block in the child, and the blocks-to-live was 2; if the child does not receive the parent's message in time, its block is invalid. It is authoritative in the sense that if a parent tells you to receive a message, you must receive it within its blocks-to-live, which means you must produce a block by then, otherwise your block is simply invalid.

What are the performance tradeoffs?

There is an overhead to the load-balancing function in general, because you have to figure out what the current behavior of the network is. And assuming load balancing means reorganizing the shard tree, I think validator sets have to be reorganized too: if you are shifting a shard to a different depth, you need validators of a different class, with hardware that can validate shards at depth 4 instead of depth 3; otherwise you are limited to reorganizations that put a shard into another place at the same depth. So there are definitely some tradeoffs there, but I don't think we have thought about them yet.

How do validators join a shard?

We haven't specified that yet, but there is lots of existing research; the Eth 2.0 folks know how to do this. It seems like a standard problem, nothing too special.

Why 2 in this case? Why did you write 2 blocks?

Oh, that was just an example; it can be pretty much anything you want. It's just a depiction.

What's the marginal cost to validate a child?

The marginal cost of validating another child of a shard you already validate would be cheaper, because it's on the same path; if it's on a different path, then you need lots of other stuff.
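The blocks-to-live window described above can be captured in a few lines. This is my own illustrative sketch, not code from the spec, and the function names are hypothetical:

```python
def receipt_deadline(send_height, blocks_to_live):
    """Last height of the receiving chain at which the message may be received."""
    return send_height + blocks_to_live

def must_orphan(send_height, blocks_to_live, current_height, received):
    """True once the receiving chain has progressed past the window with no
    receipt: the sender then knows the message will never be received and
    orphans the block that sent it."""
    return not received and current_height > receipt_deadline(send_height, blocks_to_live)
```

For example, with blocks_to_live = 2 and a send at height 10, the message must be received by height 12; once the receiving chain reaches height 13 with no receipt, the sending block gets orphaned.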
A bit related to my earlier question, but if I understand the load balancer correctly, basically the contract that is most popular, the one getting the most messages, is going to move up toward the root, higher and higher; and the higher you are, the more children you have, so that shard is going to receive tons of messages.

I think the number of messages you receive doesn't depend on where you are; it's just a function of what your smart contract is. You are also routing, so you do receive more if you are higher up; but if you are up, you are actually skipping fewer hops, because you only move up if there are lots of users over here, in other subtrees, sending cross-shard messages like this into your contract. You move up because lots of people in this subtree are trying to communicate with your contract. Say your ERC-20 contract lives on this shard: that means that for any other shard to change the assignment of ERC-20 balances to addresses, it has to communicate with this shard. So it seems natural for the contract to live on this shard, and nothing seems to break. Unless you want to split that shard; I'm not sure that function, splitting a shard, makes sense.

It's not specified that contracts are assigned to shards based on their address. Whatever the metric is, it's free: the load balancer decides what a contract sits on. I definitely don't think it's safe to let validators choose what they validate; that's just completely unsafe. You could choose where to attack, you could choose where to do block withholding. The notion that validators get to choose what shard they're on has never even crossed my mind.

And for my account? Oh, you're talking about your account. Yes, it's the same for your account: you don't get to choose where it is. The load balancer decides where your account goes, and the load balancer moves it around, of course. You don't even need to know what shard it's on; you just need your transactions to get routed to that shard, and you don't care what shard that is. No, you won't care, because, at least in the model that I like, there is the same gas price everywhere. Having different gas prices or different execution environments doesn't appeal to me one bit. To me the natural model is the same gas price across all shards and the same execution environments across all shards, whatever Eth 2.0 and 1.x come out with. That's my personal preference; at the end of the day this isn't up to any one person, but to me, just as a matter of safety and responsibility, it shouldn't be that people get to target shards for price, performance, validation or anything else.

What about message congestion? If as a parent you get tons of messages from three or more children, is there a priority between them? What happens if it's congested?

I'm not sure at this point, but there shouldn't be a priority among which messages you choose; that is a concern, that is an open problem. One possible solution is that you don't make messages appear as-is on the parent: you just put the root of the aggregated messages on the parent, and the pre-images of that root are available elsewhere. That reduces the load on the parent.

What does it physically mean that some message lives for several blocks? Where is it physically?

When I say blocks-to-live, it means you can only receive this message within that many blocks; it defines where you can receive the message. Maybe it means being included in the receive log of a particular block.

What is the message, exactly? Can you elaborate? Is it like a finalized statement of the shard?

The message is some object that has a payload; maybe it's a transaction. No, it's not a finality statement: in this context, when I've been saying cross-shard transactions, I mean actual EVM transactions.
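The aggregation idea mentioned above for congestion, putting only the root of the batched child messages on the parent with the pre-images available elsewhere, could be a standard Merkle-style commitment. A minimal sketch of mine (not from the spec), using one common convention of duplicating the last leaf on odd levels:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def message_batch_root(messages):
    """Binary Merkle root over a batch of messages; only this root would go
    on the parent chain, keeping the per-child footprint constant."""
    level = [h(m) for m in messages]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last leaf on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

A child's validators would publish the batch itself off the parent chain, so anyone holding the pre-images can prove a particular message is in the committed batch.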
And on validity: if a malicious validator brings her own message to the parent, how is it meant to be checked?

I think it would be very similar to how Eth 2.0 does it: you have a committee delegated to the child which signs off that these messages are in fact valid.

So there is finalization happening on the shard? By what you have just said, it's a finalized checkpoint that does the transition.

I think you should be able to receive messages before they are finalized, as long as someone attests that these messages were in the fork choice at the time.

Also related to blocks-to-live: it seems like you risk going back. Because you don't receive a message, you risk removing part of the chain.

Yes, the child risks removing, orphaning, some of its blocks because some block in the parent was orphaned.

But you also commit that you're not going to remove more than some amount?

No, the child has no such commitment, except that when it sees something finalized on the parent, it knows none of those parent blocks can be orphaned, and hence it can come up with a limit on how far back its own chain might be orphaned. The blocks-to-live value doesn't actually restrict how far you might roll back; it's just there so that there is a very specific range of blocks in which you can receive a message.

It seems like you could have multiple messages that start at different places, each of which will cause orphaning of some blocks, and because you orphaned those blocks you now don't receive another message.

That happens, for sure. If the child has received even one message from the parent that later gets orphaned in the parent, you orphan everything that has happened since. But then there would be another message that you just orphaned along with it. That's fine: if it's still in the fork choice of the parent, you receive it on the other fork of the child. You only have to remove from your fork choice the message that was orphaned in the parent; all the others you can receive again on your other fork in the child.
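That reorg rule can be sketched under assumptions of my own (a child chain modeled as a list of per-block sets of received parent-message ids): truncate at the first block that received a now-orphaned message, and re-receive on the other fork whatever is still in the parent's fork choice.

```python
def truncate_after_orphaned_receipt(child_chain, live_parent_msgs):
    """child_chain: per-block sets of parent-message ids received by the child.
    live_parent_msgs: message ids still in the parent's fork choice.
    Returns (blocks kept, messages to receive again on the other fork)."""
    for i, received in enumerate(child_chain):
        if received - live_parent_msgs:  # this block received an orphaned message
            dropped = child_chain[i:]
            re_receive = set().union(*dropped) & live_parent_msgs
            return child_chain[:i], re_receive
    return child_chain, set()
```

Only the orphaned message itself is lost; everything else that was dropped with it remains receivable, which matches the point above.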
I want to ask: we are validators of our parent, and our parent validates us, so we have all of that information in us; but if we want to go back, we have to go through our parents?

Okay, that's interesting; that's something we haven't thought about, but it's a creative way to put it.

Is the blocks-to-live value, what you should set it to, influenced by the width of the tree?

I don't think so. The blocks-to-live is just some parameter you come up with at run time. You'd set it as low as you can, because then you get your message through quickly; the sender might put it low, and the receiver is supposed to receive it. But if you set a really low blocks-to-live, the message might just end up sitting in the recipient's mempool and never get processed, and then you just have to orphan.

Is it something you include with your transaction? Each message specifies a target block and a blocks-to-live? Like in Ethereum, where everyone chooses their own gas: does everyone choose their own blocks-to-live, or do their clients choose it for them by looking at the current state of the mempool?

I'm not sure. In our proofs of concept we just use a constant time-to-live. I don't yet feel 100% comfortable saying that your smart contract will be able to choose the time-to-live for its messages, because that could potentially increase the overhead of the solution. However, it's definitely not a function of the width of the tree, because the time-to-live only concerns the number of blocks within which a message has to be received by your parent or child; it only concerns immediate neighbors. In a wide tree, with something like 2n hops to get to the destination and back, it's going to take on the order of 2n times the time-to-live in latency, because the time-to-live applies at every hop.

So it makes roughly one hop per block, is that the idea?

It might take a few blocks per hop to be received. Basically, the idea is that we do optimistic execution of cross-shard transactions: in the normal case everything goes well, it all gets executed and finalized; and if it doesn't, things can be orphaned and reverted. As for the time-to-live, say it's five blocks to get in: that is measured not in the sending shard but in the receiving shard, so I don't know how meaningful it is for the sender whether it gets in within one block, because they're not going to get a message back for a while no matter what. Whenever you send a message, you have to say which fork you are sending it to, and the time-to-live starts from that block; so depending on where the parent includes the message, it might take longer for anything to come back to this shard. It's a bit complicated, and the range of times it can take for a message to get back is much bigger than just the number of hops you have to make.

Where does the message appear along the path? Is it stored in a smart contract?

It's stored in the block that receives it. Each block has a log of the messages it sent and received: just as you include the transactions that happen in a block, you also include these send and receive messages.

Does that mean that at some stage I need to wait for a finalized checkpoint to be able to go forward?

No, you don't have to wait for finality. If there is a block in your parent's fork choice that sends you a message, then you can receive it; you don't have to wait for finality.

So you entangle the fork choices? We entangle the fork choices, yes.
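The last two points above, blocks carrying send and receive logs, and receipt being allowed as soon as the sending block is in the parent's fork choice rather than finalized, can be illustrated with a minimal sketch (the names and layout are my own, not the spec's):

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    height: int
    sent: list = field(default_factory=list)      # messages this block sent
    received: list = field(default_factory=list)  # messages this block received

def can_receive(msg, parent_fork_choice):
    """A child may receive msg once some block in the parent's current fork
    choice lists it in its send log; finality is not required. This is the
    sense in which the fork choices are entangled."""
    return any(msg in b.sent for b in parent_fork_choice)
```

If the sending block later drops out of the parent's fork choice, the receipt is orphaned along with everything built on it, as discussed earlier.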
Is it possible for different parts of the network to send a message to the wrong place?

Sure. In this prototype, shard 3 moves around, or something like that; so even if another part of the network isn't aware of the current position of that shard, it sends a message without knowing the final destination, and those messages just keep getting routed until they reach their destination.

Can new shards be created or removed? What happens if a message was sent to a shard that doesn't exist?

I don't think we have removed a shard yet. Adding a shard seems fairly easy; removing a shard seems much harder, because of exactly the concern you raise, so we haven't thought much about removing shards. I guess if you had to remove a shard, you'd basically keep track of what address you want a message to go to, and then just find where that address now lives, in a different shard. For this fork choice rule, we have a CBC draft paper that came out recently, and in the examples section it gives the formal definition of this shard system; it describes the fork choices and the consensus protocol.

This could be a wild guess, but it seems to me that you may be able to model this as a sheaf, and basically things like removing a shard are difficult because of the gluing condition on the base space; things like messages, and recomposing shards and so on, are basically natural transformations between these sheaves. I know it's complicated mathematics, but it feels to me that if you find a formal structure that models faithfully what you are trying to do, then answering some of these questions just becomes a matter of computing some things in those structures.

That's interesting. I think we'd
love to know more about the ideas you have. Even if we don't have those particular ideas, we do use formal methods in all of this, from the very start; and even if we're using unsophisticated tools and might have to do a lot more work, we still hope to get the kinds of guarantees that you'd hope to get from formal methods.

What I was trying to say is that when you do formal methods, you can look at them from, let's say, an operational-semantics perspective, saying: okay, I formally verify that whatever I implement works as specified. Or you can do it denotationally, and in that case you are not really using formal methods for implementation but to prove general, global properties about your system. That's probably not useful from a practical point of view, but it's useful if you want to answer some conjectures. That's all I'm trying to say.

So, denotational semantics, and what's the other one? Operational? Can you define those terms for us?

Sure. Operational semantics is when you have, say, an algorithm and you want to model how a machine executes it: you model formally, step by step, at a very low level, what happens. Doing things this way, you can be sure everything gets executed exactly as specified, but obviously it doesn't say a lot about the global behavior of the specification. Denotational semantics, on the other hand, says: what I'm describing defines a precise algebraic object, and then you can prove theorems using algebra. So, and this is clearly not the case here, but suppose the algorithm actually gives you the definition of a group, and you find that a property like finality corresponds to a given subgroup of that group being normal; then you can just prove that that subgroup is normal, and that's what the denotational method tells you. It tells you nothing about how faithfully you are
going to implement it; it tells you something about the properties of the algorithm itself. And maybe that would be useful.

Is the types-as-formulas interpretation denotational?

There are links between denotational and operational semantics, but if I remember correctly, it's very rare to have a way to go from denotational to operational. There is actually a super strong property, which almost never holds, where your denotational semantics is your operational semantics; very, very few cases, only super simple algorithms, satisfy it.

Can you give a talk about this? Are you giving a talk next?

We can end the talk here; anyone who wants to discuss more can chat offline.