Very cool. Welcome to sharding implementers call number zero — this is quite the turnout. Hey Justin. Excited to see all these people that are working on sharding. Just a few procedural things: the plan is to do these every other week, on the weeks that we don't have the core dev calls. Sometimes the core dev calls get pushed back a week; if one gets pushed back, we'll push this back a week too, so we don't have two big calls in one week. There are a lot of us, but if you could each just go briefly, one by one — say your name and what team you're working with, and that'll probably be fine. You can say where in the world you are, and then we'll move on to talking about the different teams and what's going on right now.

I'll start. I'm Danny, I'm with the Ethereum research team, and I'm in New Orleans.

I'm Justin, also from the research team at the Foundation, and I'm based in Cambridge, UK.

Hey everyone, I'm Raul. I'm with Prysmatic Labs, and I'm based in Chicago.

I'm [inaudible], working for Parity Technologies. I'm based in Moscow.

My name is Mikhail. I'm from the Harmony team, based in Russia — in Omsk, in Siberia.

I'm Dmitry. I'm also from the Harmony team, on EthereumJ, and from Russia — Saint Petersburg.

Hi, I'm Ben Edgington from ConsenSys, PegaSys group, working on the Pantheon client. I'm usually based in the UK, but today in Switzerland. I'd like to welcome Olivier of PegaSys to the call — he joined our team yesterday.

Hi there, I'm Olivier. As Ben said, I joined yesterday. I'm based in Paris and I work with Nicolas Liochon, who's on vacation. I'm with PegaSys.

Hey, Preston here from Prysmatic Labs, and I'm based in New York City.

Hello, I'm Jannik. I'm working for brainbot, but helping with the Python sharding implementation, and I'm from Munich.
It's very hot right now.

Hello, this is Hsiao-Wei from the Ethereum Foundation research team, based in Taipei. By the way, Vitalik will call in in three minutes.

Hello — sorry — I'm Mamy from Status, and I'm based in Paris.

Hello, I'm Ryan, also from Status.

Hi, I'm Jarrad, also from Status, working on Nimbus. I'm everywhere.

And I'm [inaudible], also from Status, right now in Costa Rica.

I'm Chris. I'm doing research on developer practices in scaling across layer 1 and layer 2 solutions, and that research is funded by the Ethereum Foundation's May grants cohort. So I'm just listening in today and taking notes.

I'm Paul from Sigma Prime. I'll be working on Lighthouse, and I'm out of Sydney.

My name is Adrian. I'm working with Paul at Sigma Prime, in Australia as well, though currently in South Africa — working on decentralized staking pools, and at the moment trying to throw together a Vyper BLS implementation, some stuff like that.

I'm Chisholm(?). I'm working on [inaudible] with Kevin on the research team in Taipei.

Hey guys, this is Lane Rettig. I'm on the Ewasm team, and I'm in New York City, where it's also really hot and humid right now.

Hello, I'm Kevin, and I'm from the Ethereum research team, and I'm based in Taiwan.

Nicholas — I'm also part of the Ethereum research team, and I'm also from Taiwan.

Hello. — Hey, Vitalik. — Hi. — Hey, we're going around just giving a brief intro: who we are, where we are, and what team we work with. — Okay. — What was that? Where are you? — Oh, where am I? I'm in Toronto right now. — Cool. Anyone else?

I'm Mikerah. I'm from ChainSafe Systems, and I'm currently based in Toronto. — Oh, cool.

Yeah, I'm Casey. I'm with the Ethereum Foundation, based in the US — the state of Michigan — but currently in Florida.

And surprise: we went through it in an apparently random order without many collisions. — Yeah, pretty solid. — Yeah, we may have found a new consensus mechanism.
Yeah, this is exciting. Just go when you feel like it — make a block. — I guess I'm the last one. Cool. Alex, from Ewasm and Solidity, based in Ireland.

Okay. So I know there are various clients and various proto-implementations working on sharding, or planning to work on sharding. Can one member from each team go through and give a brief update on what's going on in their world related to that? Ewasm could potentially give an update too, but we'll save that for later, I guess. If you're planning on working on a client, working on a beacon chain implementation, something like that — someone from your team, give us an update. I can call out teams, though I probably don't remember all of them off the top of my head. How about someone just goes, or I can pick a team. Paul — give us an update on Lighthouse.

Good choice. So yeah, it's coming along. We're just waiting for the spec to be finalized. We've got this first round of the spec kind of implemented — the state transitions are in — but it's changed, so in the next week or so we're going to get onto that. We're playing around with P2P, just trying to get some P2P instance running and an interface with it, so we can swap it out later. And I'm trying to find some BLS aggregate implementations that work and that are somewhat safe — at least a tiny bit safe.

Great. I'll give an update on the Python beacon chain repo that people have been working on. I'm almost done with the v2.1 update. I have a PR there with some proof that it kind of works — testing — and I'm going to get that in today, hopefully. I should have a decent reference of the v2.1 by the end of the week. Does anybody — yeah, okay — how about Nimbus?
Okay. So I started implementing the beacon chain v2.1 two weeks ago. The thing is, at that point the reference implementation in Python was disconnected from what was in the HackMD document, so I started basically from scratch, trying to get as far as possible. I also explored what we could use for BLS cryptography and started a wrapper in Nim for Milagro crypto, though I have some reservations about it — I think we need to build something from scratch regarding crypto to have something proper.

Great, thanks. And Prysmatic Labs, who've renamed their client Prysm, I believe?

Yeah — lots of updates. So, we migrated away from Geth. Given the spec, we realized that it's best to just have an independent Ethereum 2.0 implementation. Here's some of the stuff that we've done. We have local network P2P via mDNS through gossipsub. We fully transitioned into our own independent project. We have a full beacon node running, and the sharding client runs as a separate process; they communicate with each other via gRPC. We finished the incoming block sync processing conditions. We have crystallized and active state transitions, shuffling of attesters and proposers, obtaining the cutoffs, and announcement of blocks via P2P. We have the validator registration contract — through a deployment tool that we created, we were able to read the logs and register and induct validators into the queue. We're also currently working on the fork choice rule, and doing initial chain sync. Aside from that, we created a simple simulator tool that allows us to simulate incoming blocks, and we're going to be using that to test our system on an end-to-end basis going forward. We've been exploring a lot of different things, and we have a lot of ideas around P2P communication.
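The attester/proposer shuffling and committee "cutoffs" mentioned above can be sketched roughly as follows — a hedged toy version in Python. The function names and the exact hash-driven swap details are ours, not necessarily what the v2.1 spec prescribes:

```python
import hashlib

def shuffle(indices: list, seed: bytes) -> list:
    """Hash-driven Fisher-Yates shuffle of validator indices.

    A toy sketch: consume 3 random bytes per swap from iterated
    SHA-256 of the seed, with rejection sampling to avoid modulo bias.
    """
    out = list(indices)
    source = seed
    i = 0
    while i < len(out) - 1:
        source = hashlib.sha256(source).digest()  # stretch the seed
        for pos in range(0, 30, 3):
            remaining = len(out) - i
            if remaining == 1:
                break
            rand = int.from_bytes(source[pos:pos + 3], "big")
            # reject values that would bias the modulo reduction
            if rand < (2 ** 24 // remaining) * remaining:
                swap = i + rand % remaining
                out[i], out[swap] = out[swap], out[i]
                i += 1
    return out

def committee_cutoffs(shuffled: list, num_committees: int) -> list:
    """Split shuffled indices into num_committees roughly equal slices."""
    n = len(shuffled)
    return [shuffled[n * j // num_committees: n * (j + 1) // num_committees]
            for j in range(num_committees)]
```

Because the shuffle is driven entirely by the seed, every client that sees the same seed computes the same committees — which is why the quality of the randomness beacon (discussed later in the call) matters so much.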
That's something that my partner Preston here will be able to talk about later. Overall we have a lot more to cover, but that's just some of the things that we've done recently.

Nice. And is that state transition the v2.1 or the previous version? — I believe it's v2.1. — Oh cool, great. I'm excited to take a look at that.

Cool. Oh — PegaSys, have y'all done any work, or any update?

Yeah, I can give an update. We've mostly been focused on team building lately. So Olivier has joined, and we've got another developer joining in a few weeks' time. The immediate targets to work on are the BLS implementation — that seems a common theme — and random number generation; we think there's quite a lot of work to be done there. And also a beacon chain implementation.

Great, thanks. And how about Harmony?

Um, we started to work on the beacon chain implementation about three weeks ago. What we have by now is a validator registration contract — it's possible to deposit your own validator and query for deposited validators from transaction receipts. We've already made plans for the next steps: block production on the beacon chain, and the block processing part also. So yeah, we're starting to implement the state transition functions, block production, and so on. That's it for now. By the way, we have created a page that contains all the progress and enclosed plans for our implementation, and it will be possible to follow pull requests there against a reference implementation. I guess it might be useful for other implementers.
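The deposit-and-induction flow both Prysmatic and Harmony describe — deposits observed in registration-contract logs feeding a queue of pending validators — might be modeled minimally like this. All names, the deposit size, and the per-cycle induction limit are our own illustration, not spec constants:

```python
from collections import deque

MIN_DEPOSIT_ETH = 32  # illustrative figure; check the current spec

class ValidatorRegistry:
    """Toy model: deposits read from contract logs enter a FIFO queue
    and are inducted into the active validator set in order."""

    def __init__(self):
        self.pending = deque()
        self.active = []

    def on_deposit_log(self, pubkey: bytes, amount_eth: int) -> None:
        """Called for each deposit event found in a transaction receipt."""
        if amount_eth >= MIN_DEPOSIT_ETH:
            self.pending.append(pubkey)

    def induct(self, max_per_cycle: int = 2) -> None:
        """Induct up to max_per_cycle queued validators (e.g. per epoch)."""
        for _ in range(min(max_per_cycle, len(self.pending))):
            self.active.append(self.pending.popleft())
```

The FIFO queue is the point: induction order is determined by deposit order on chain, so every client replaying the same logs derives the same validator set.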
I'll share the link after this meeting. — Yeah, any links we talk about — if you can pop them into the Gitter sharding channel, I'll aggregate the stuff after the call.

At ChainSafe, we've been working on the Lodestar chain, which is our JavaScript implementation of the beacon chain. Right now we've been looking at options to implement BLS signatures. I've been playing with the Milagro crypto libraries available in JS, trying to use the primitives to build a BN128 curve to match the current Python specs. We're also exploring the option of possibly compiling from Rust to WebAssembly for that. Afterwards, we'll be implementing the state transition functions as per the v2.1 spec. And that's pretty much what we've been doing. We started this project at an internal hackathon we held three weeks ago, so progress has been steady. That's the update from our team.

Great, thanks. Is there any client I missed? Any research updates? I know there have been various things posted to ethresear.ch, and I know there are different things going on with the research people. Is there any update anybody wants to give right now?

I, from my side, have been working on the recursive justification fork choice rules, and I wrote that post on ethresear.ch a couple of days ago that tries to make a kind of minimal partial spec of the beacon chain with the fork choice rule, justification and finalization, and validator set changes. It's minimal in the sense that it tries to focus as much as possible on just that, and doesn't really focus on implementation, BLS signature aggregation, shard committees, or any of those other details. The goal of this is basically to have it all in one place, so that it can be a resource to analyze for security — hopefully formally proven, and so forth.

Can you give a brief on what your design goal is with the RPJ? Sure — so the design goal.
There are a few design goals, right? One of them is to maintain the basic properties of safety and liveness as defined in the Casper FFG spec. Another is to make the algorithm as simple as possible. One way in which Casper FFG did not satisfy that is that it had this weird mechanism where, in order to choose between different checkpoints, you have one mechanism, but then in order to choose which chain is the longest within one particular epoch, you use some different rule — and that basically doubles the complexity of the thing.

Another design goal is what I call stability, which basically means the fork choice is a good prediction of the future fork choice. In general, hybrid fork choice rules are very hard to make stable, because it's hard to make sure that whatever fork choice we're using on the small scale is actually predictive of the fork choice you're going to use on the large scale. So recursive proximity to justification basically gets rid of epochs; it uses the same proximity-to-justification mechanism to choose between blocks in general, and it basically allows any block to be a checkpoint.

Another reason why I started going in this direction is that I cared about maximizing resistance to manipulation of the random numbers. So basically the idea is that even if the random beacon is total crap — for example, even if the attacker can basically choose the seed from one billion possibilities — the mechanism should still be secure. It should even be secure enough that the chain basically never reverts even one single block, assuming low network latency and so forth. And there are two reasons for this. One of them is that instead of the random sampling being stateless, the random sampling chooses permutations and goes through permutations, so the total number of slots given to each validator is, in the long run, going to be the same — you can't manipulate that upward. The other property is that because it uses GHOST instead of longest-chain as a fork choice rule, the proposers — which were these attackable choke points of chain lengthening before — don't really have that much influence anymore. So the chain is basically resistant against even up to 80 to 90 percent of proposers being taken over by the attacker, as long as the majority of the attesters is honest.

Any more updates from you, Vitalik, or anyone else on the research team or at large?

I'll have updates on the 99% fault tolerance stuff that I linked in the ethresear.ch and Casper telegram soon. I'm just waiting for [inaudible] to give me a review and comments, and then I'll publish it. In parallel, I'll keep thinking about that more, but that's probably for later.

Cool.

Otherwise — at Status, we're pretty new to all the sharding stuff, so I've been collecting a lot of blog posts, research notes, forum posts, and videos about sharding and everything else for Ethereum 2.0. I posted the link in the chat — basically it's a repo, and if you think something is missing or you want to add material, feel free to put up a pull request. Currently there is stuff about sharding, Casper, Plasma, state channels, BLS, gossip; we have a video from EDCON, blog posts, and things like that.

Awesome.
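The GHOST-style fork choice described above can be illustrated in miniature — a hedged toy model, not the spec's algorithm: starting from genesis, at each fork follow the child whose subtree carries the most attestation weight.

```python
def ghost_head(blocks: dict, latest_votes: dict, genesis: str) -> str:
    """Greedy Heaviest-Observed SubTree head selection.

    blocks: {block_hash: parent_hash}
    latest_votes: {block_hash: weight} of attesters' latest votes
    """
    children: dict = {}
    for block, parent in blocks.items():
        children.setdefault(parent, []).append(block)

    def subtree_weight(block):
        # a block's weight is its own votes plus all descendants' votes
        return latest_votes.get(block, 0) + sum(
            subtree_weight(child) for child in children.get(block, []))

    head = genesis
    while head in children:
        # ties broken arbitrarily here; a real rule needs a deterministic tie-break
        head = max(children[head], key=subtree_weight)
    return head
```

This makes the proposer-resistance point concrete: the head is driven by accumulated attestation weight on subtrees, not by who happened to propose the longest chain of blocks.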
Thank you. So in terms of research updates from my side: I've been focusing on the randomness beacon — basically how to instantiate it once we have a VDF, and the various security considerations, but in particular which specific VDF construction we'd like to use. I've gone through the whole literature, and I think my favorite construction right now is by Benjamin Wesolowski. It's an actual, proper VDF in the sense that there's an exponential gap between the time it takes to compute the function and the time it takes to verify it. One of the key questions we're looking at: because it's based on RSA groups — at least that's one way to instantiate it — we need to think about trusted setup. How do we pick the RSA modulus, basically? One very promising approach is to pick relatively small random numbers and use those random numbers as the moduli for parallel VDFs. So the VDF would be composed of sub-VDFs, each with its own modulus, and if at least one of the moduli is safe — in the sense that it cannot be factored — then the whole construction is safe.

So it's quite multidisciplinary, because I'm talking to the cryptographers — and actually the Foundation is organizing an event where we're inviting the world's top VDF cryptographers to all meet in San Francisco in a couple of weeks, and almost everyone is attending, so there should be some good stuff from that event — but I'm also talking to number theorists. Also, a very important consideration is the hardware manufacturing. The current plan is basically to build a VDF ASIC which is a commodity — basically freely accessible, and given to a lot of people — and this ASIC needs to be close to what a no-expense-spared attacker can build himself. So the performance of the VDF in terms of speed needs to be very fast, and that's to counter a couple of attacks that an attacker with a much faster ASIC could do. So I'm talking to a lot of people and tying everything together, and I'm hoping I'll have more visibility towards a full spec in maybe a month and a half or two. In about one month there will hopefully be a report from hardware specialists that is going to give us visibility on whether or not manufacturing a VDF ASIC which is fast enough is doable — how much time it will take, and considerations like that.

Justin, you mentioned the research library. Do you have a curated list of reading material to get into this problem space?

Yes, I do. I'm actually preparing one for the event in a couple of weeks, and I'll tweet about it and make sure that you can find it. — Fantastic, thanks. Great. Any other research updates? Anything from the Ewasm team that might be relevant — some thoughts on sharding as you've been digesting it?

I do have some thoughts. Mainly we've been focused on thinking about cross-shard transactions. — We have an agenda item about cross-shard transactions, but if you're inspired to talk about it right now, I think we can move toward that. — Okay. What we're planning on building is a little bit unorthodox, because — I say "we", but maybe I don't want to speak for the whole Ewasm team; at least myself — I want to focus on the phase 2 part of the sharding spec. The question is: how can you prototype phase 2 without a phase 1 implementation already built? And the answer is: well, phase 1 and phase 2 are actually decoupled. As Justin and others are saying, the only thing that phase 1 produces is a bunch of ordered data blobs — these are shard blocks, with crosslinks between them. So you just black-box phase 1 as, say, a big JSON file that defines some data blobs in a given order, and then the phase 2 prototype would just process these data blobs and achieve cross-shard transactions. That's our hope with prototyping phase 2.

One of the advantages of doing this in JavaScript — a couple of advantages over other languages that are working on research prototypes, like Python — is that there's already a libp2p library implemented in JavaScript, and JavaScript already has access to a native Wasm engine, so I don't have to mess with any workarounds to get access to either libp2p or a Wasm JIT engine. So yeah, those are the two benefits of prototyping phase 2 in JavaScript, and that's about it.

Great. Can you say a little bit more about what you're trying to prototype? You're trying to prototype cross-shard execution — like a probabilistic execution engine — to be able to resolve cross-shard communication before the crosslinks are necessarily processed? Or what exactly — am I missing something?

No, well — I mean, are you doing the cross-shard execution through the crosslinks?

Sure — well, I mainly think about it from a delayed state execution model versus the non-delayed one. So the crosslinks would already be there, and at that point phase 2 is completely decoupled from phase 1. The crosslinks are put in at phase 1, right, not phase 2, so we don't have to worry about phase 1. We can just black-box all the details of BLS signatures, of putting in crosslinks, of forming shard blocks. We just accept as given: here's a bunch of ordered blobs with crosslinks — now, how do you process the transactions? This is the approach I'm hoping to take, and to be completely ignorant of all the phase 1 details.

Great. While we're on cross-shard communication — I know there's the distinction between opaque sharding and transparent sharding, I can't remember which is which, but the idea of users having to deal with this or not. I know that in general right now the thought is to push it out to the application layer to handle the cross-shard stuff. I know this was kind of a contentious debate while we were in Berlin — are there any more thoughts or updates on cross-shard communication research, or is that maybe a little further down the line at this point? ... Okay, we'll pick it up again another time.

Great. So the next thing I threw on the agenda just recently is the v2.1 spec. Are there any questions about this at this time? I mean, as you're working on it — if you begin working on it — we can answer questions in the channel. Yeah, go ahead.
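Coming back to the phase 2 black-box idea from a moment ago: treating phase 1 as nothing more than an ordered list of data blobs with crosslinks, the outer loop of a phase 2 prototype could be sketched as follows. The JSON shape here is purely hypothetical — our own invention for the sketch, not a proposed format:

```python
import hashlib
import json

# Hypothetical phase 1 output, black-boxed as a JSON file of ordered blobs.
SAMPLE_PHASE1_OUTPUT = json.dumps({
    "shard_blocks": [
        {"shard": 0, "slot": 1, "blob": "aabb"},
        {"shard": 1, "slot": 1, "blob": "ccdd"},
        {"shard": 0, "slot": 2, "blob": "eeff"},
    ]
})

def process_blobs(raw_json: str) -> dict:
    """Take phase 1's ordered blobs as given and fold each one, in order,
    into a per-shard running state root.

    A real phase 2 prototype would interpret the blob bytes as
    transactions; here we only demonstrate the decoupling.
    """
    state_roots = {}
    for block in json.loads(raw_json)["shard_blocks"]:
        data = bytes.fromhex(block["blob"])
        prev = state_roots.get(block["shard"], b"\x00" * 32)
        state_roots[block["shard"]] = hashlib.sha256(prev + data).digest()
    return state_roots
```

Nothing in this loop knows about BLS signatures, crosslink formation, or shard block structure — exactly the ignorance of phase 1 details the speaker is aiming for.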
I was just going to give a bit of a warning about that. Some parts of that spec are kind of explicitly provisional, in the sense that things like how the dynasty change works, how the epoch transitions work, and so forth are going to change pretty significantly if the recursive proximity to justification stuff gets included. The main reason why I haven't put those things into the spec yet is that I don't want to waste too much of people's time with constant redesigning — you know, implementing this in code ten times in parallel while the spec changes. I would prefer to wait a bit more for the new fork choice rule stuff to solidify, and ideally get more review from different sources, before we try to put serious code into those parts.

So given the current spec, is there anything that — yeah?

Okay, so I was about to go into that. The things that I think are absolutely worth working on: number one is obviously aggregate signatures. Number two is the general structure — that you have this active state and a crystallized state; aggregate signatures can be included; there's a bit field that keeps track of all these aggregate signatures. For now, maybe black-box dynasty changes — so don't really have dynasty changes, and just have one single validator set. I feel like at this point, if you can get even half the spec — even a minimal version of the beacon chain that doesn't implement any of the dynasty stuff or any of the RANDAO stuff — and then, once you get to that point, probably just focus on the peer-to-peer and try to see if you can make it actually work as a network. The rest of the protocol structure and details will probably keep getting filled out over the next two months or so.

Any other thoughts or questions on the v2.1 as it currently is? Just in general, one of the big things that happened with v2.1 was combining the beacon chain block attestations and the shard crosslinks — so they are one and the same, and also serve as the FFG votes.

Yeah — well, in this spec there are actually three things that it combines together. One of them is FFG voting; one of them is small-scale block attestation — and those two are fully combined now with RPJ. The third thing is the shard crosslink votes. The mechanism for which shards are active during any particular dynasty, and who's assigned to what crosslink — that itself might end up going through a couple more redesigns. So if you're implementing, for now I would even say consider just making it a stub, where the simplest stub is probably to say that one height corresponds to one particular shard. And then if the shard committees end up being really tiny, so be it.

Yeah, one thing I'd add is that with the way RPJ works, in the specific case where the number of validators is extremely small, it does open up some other possibilities. Some of those would basically be: if the number of validators is too small to support one distinct committee at every height, then we'll just have the validator sets overlap, and keep one committee at every height, but have some validators be part of multiple committees. Those are the kinds of ideas that I want to keep thinking about.

Okay. So on the v2.1: there's some good stuff there — the bones, I think, are generally in place — but as discussed, there are still going to be some changes. The next thing is P2P: conforming on P2P messages, Prysmatic's protocol buffers, and other P2P relay discussion. Does anybody want to start us off on P2P?

Yeah, I can start on that — I put this on the agenda. Basically, at Prysmatic Labs what we're doing is using protocol buffers for now, because they're easy to use: they generate the stubs we need for these clients to talk to each other really easily. But the problem is that they have unordered fields, which is difficult for hashing. If you're considering sending a block across with ordered transactions, it may not come out in the same order for another client. So we're exploring other alternatives like FlatBuffers, where you just read the wire format directly — there's no reserializing. So we're kind of looking at that.
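The ordering problem just described can be made concrete: protobuf leaves field order on the wire unspecified, so two conforming encoders can emit different bytes for the same message — fatal if you hash the serialized bytes. A minimal sketch of the fix the discussion circles around — fixing a canonical, schema-defined field order before hashing. This is a toy format of our own, not real protobuf and not a proposed standard:

```python
import hashlib
import struct

def canonical_encode(message: dict) -> bytes:
    """Encode a {field_number: value_bytes} message deterministically.

    Fields are written in ascending field-number order, each prefixed
    with its number (2 bytes) and length (4 bytes). Because the order
    is fixed by the schema, every client produces identical bytes, so
    hashing the serialized message is safe.
    """
    out = b""
    for field_number in sorted(message):
        value = message[field_number]
        out += struct.pack(">HI", field_number, len(value)) + value
    return out

# Two clients building the "same" block message in different field order:
a = canonical_encode({2: b"attestation-bits", 1: b"parent-hash"})
b = canonical_encode({1: b"parent-hash", 2: b"attestation-bits"})
assert hashlib.sha256(a).digest() == hashlib.sha256(b).digest()
```

With vanilla protobuf, the two encodings above could legally differ byte-for-byte; pinning the order (as a stripped-down profile or a different schema language would) is what restores hash stability.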
But what I wanted to bring up is: can we start early on agreeing on some kind of schema? Preferably something that's somewhat well supported and can generate code for most of the languages we want to use. I just wanted to hear if anybody had thoughts or preferences on conforming on messages early, so that when we want to test things out, we can already start communicating.

Do you have a prototype for a message — maybe somewhere on a public page?

Yeah, in our Prysm repository we have our proto messages defined — this is what our clients are using to talk to each other at the moment. So if you want to look and see how the schema is defined and how it's used, it's in our repository.

Yeah, I think that would be a good point for every one of us to start from — a discussion of the schema and message format. What it was before is we sort of just agreed on the fields and what order they were in, and that was just written down, and then we all implemented that. I'm kind of hoping that for Ethereum 2.0 we have something a little bit smarter and easier to use.

Hello — I've taken a look at the protobuf stuff, and I'm not sure if I'm right: does the non-deterministic serialization only happen when we're using the map data structure? Is that when this ordering problem happens?

Yeah — so the problem is when one client, in one language or one implementation, serializes this. Regarding the ordering problem: the protobuf spec, if you look at the wire format itself, doesn't define the order — it's unordered by definition. But also, the way the spec works is that if you're reading a stream of protobuf, you're supposed to take the last value for every potential key. This is a feature of protobuf — you can basically append a protobuf blob at the end of something, and a conforming parser is supposed to give you the last value back. So there are a lot of these little extras that make it difficult to use in a hashing setting. What could possibly work is a stripped-down version of protobuf where you just pick a few features and prohibit others — a little bit like Ewasm, where floats are a no-no.

Regarding that: FlatBuffers from Google was an evolution in that direction, and also Cap'n Proto — which is not from Google, but is from the guy who implemented protobuf at Google and left the company later.

I have a question: so Prysmatic Labs are using protobuf for P2P — what about serialization for the database? How do you store the blocks? Which serialization are you using for encoding the block data?

— Go ahead. — My question is: if we use different serialization for data storage and for P2P, won't we have to do two serializations when sending blocks? I mean, we'll have to deserialize the block from the database and then send it to the peers who are asking for blocks.

Oh, okay. Yeah — we still use proto for serialization and for storing things inside LevelDB. Specifically, right now we also serialize the active state and the crystallized state with protobufs. We use protobufs for basically all process communication — our sharding client runs as a separate process and connects to the beacon node via gRPC. So we serialize everything those processes say to each other through protos. And we currently serialize, like I said, the active state and the block data with proto.Marshal and those methods, and then we store them in local storage.

One question: why does the crystallized state even need any special serialization, when you could just concatenate the values together? — You mean just get the bytes out of it? — Yeah. — Oh — because we were communicating the crystallized state between processes, we created a proto for it. Otherwise, yeah, we would just need to get the bytes from it. It was just a wrapper, a container for that.

Guys, I have a question for you. I'm interested in how the signature aggregation stuff will look from a network point of view. I mean, how many messages will be required — for example, to attest to a block? Did you do any estimates of what we'll have on the network?

Yeah — go ahead. — I was going to say: not much yet, though you can kind of do the math and see how much you'll end up having to download. Probably the one bottleneck that I do see is that you have all of these validators constantly publishing — a lot of messages that then all get aggregated every eight seconds or whatever into one single thing. So that does seem like the sort of thing that could easily benefit from a separate peer-to-peer network. Aside from that, the other number we need is obviously the different estimates for how many validators will end up participating, and how many will be participating during each epoch. If doing it the naive way — everyone broadcasting to a proposer who aggregates — is too hard, then there is a scheme for doing it hierarchically that could be considered: nodes in the network randomly choose to specialize in a particular slice of the addresses, aggregate for that slice, then broadcast that to the proposer — or rebroadcast it — and that's what the proposers try to download.

In addition to the hierarchical strategy, you could also think of a kind of random path strategy, where you keep tagging on your own signature and the gradually aggregating signature travels through the whole network. — Well, that doesn't work, because it takes too long. I think if we want the block times to be reasonably efficient, we need something that takes two to three rounds of network communication, which by itself implies a fan-in of either the square root of n or the cube root of n. — Right. I guess you could have a hybrid between the two. — Right.

Okay, thanks guys. Maybe Prysmatic Labs already did something like that, or did some experiments with signature aggregation?
So we're actually in that process right now. At the moment we're just setting up the entire communication between the sharding client and the beacon node, and getting attesters and proposers to figure out when they have to perform their responsibilities. That entails, you know, fetching proto data from the beacon node. Aside from that, yeah, we haven't really started on the signature aggregation and the downloading of that.

So was there any consensus on the protobuf stuff, or is that something we still need to talk about later?

Vitalik, what do you think of using the same serialization in the beacon chain protocol as an RLP replacement? By the way, guys, what's wrong with RLP? Any thoughts on RLP? Too complicated. Yeah, so our proposal is not necessarily to get away from RLP, but to have some schema that we can all conform to — and if there's something faster that we can use, since RLP is not very fast, that would be great.

Yeah, I'll concur there. I've been missing a schema for RLP; otherwise, any schema language goes. CBOR I saw Alex proposing, and protobuf is one. There's bencode from BitTorrent — they have a schema as well, I think.

Okay, I see. Schemalessness is a really good reason to move from RLP to something that has a schema. Yeah, got it.
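To make the schema point concrete, here is a toy length-prefixed encoding in the spirit of, but much simpler than, RLP. Note that the decoder can recover the nesting structure but not the meaning of any field — that out-of-band agreement on what each field means is exactly what a schema language like protobuf makes explicit. All names here are illustrative and not part of any client:

```python
def encode(item) -> bytes:
    """Toy length-prefixed encoding (much simpler than real RLP):
    bytes -> 0x00 + 4-byte length + payload;
    list  -> 0x01 + 4-byte item count + encoded items."""
    if isinstance(item, bytes):
        return b"\x00" + len(item).to_bytes(4, "big") + item
    if isinstance(item, list):
        body = b"".join(encode(x) for x in item)
        return b"\x01" + len(item).to_bytes(4, "big") + body
    raise TypeError("only bytes and lists are supported")

def decode(data: bytes, pos: int = 0):
    """Decode one item; returns (item, next_pos). The decoder recovers
    structure, but not what any field means -- the missing schema."""
    tag = data[pos]
    n = int.from_bytes(data[pos + 1:pos + 5], "big")
    pos += 5
    if tag == 0:
        return data[pos:pos + n], pos + n
    items = []
    for _ in range(n):
        item, pos = decode(data, pos)
        items.append(item)
    return items, pos
```

A round trip like `decode(encode([b"hi", [b"x"]]))` gives back the nested byte strings, but a second client still has to know by convention which position is, say, a state root.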
Thank you. Okay, well, if people want to dig into this a little over the next couple of weeks, we can talk about it again next time. Hopefully we can figure out an encoding that works for us. Just open a thread on the research forum about that. Yeah, I think that'd be a good place to discuss it. Also, for other teams that are implementing this: experimenting with protobufs or FlatBuffers would be cool — try to communicate with our client over RPC and see if you can break any of these encodings.

You guys are using gossipsub, right? Yeah, right now it's all locally networked through mDNS, but we're quickly going to start exploring the other discovery schemes, like DHTs.

Yeah, okay. Well, I don't know if that's a topic for later, but we might want to try and pick some p2p network. We don't have gossipsub in Rust yet, so that's going to add some overhead from our perspective before we can talk to you. Got it.

Yeah — I know there have been a few people on the research team digging into various p2p constructions for sharding. Is that something we want to talk about right now? Does the research that y'all have been working on inform the beacon chain, or is it more for when we actually have the shard chains? And are these two separate p2p constructions? Sorry — the beacon chain and what other chain? Are we going to be using the same p2p setup for the beacon chain and also the shard chain? There's no one shard chain — you mean all of the shard chains? Shard chains.
Yes, plural. So the main difference is that the beacon chain is for everyone, but the shard chains are only for — well, the shard chain headers are for everyone, but the shard chains themselves are only for whatever subset of nodes cares about them, and it would be a pretty serious loss of efficiency to have them all be in one peer-to-peer network. So I think we do need — well, I know there's already work being done on a sharded peer-to-peer network, which is what it makes sense to put the shard chains on, and then the beacon chain, like the shard headers, should be on some kind of layer that everyone just downloads by default, right?

Anyone working on p2p want to give us some thoughts? Preston, do you want to share some of the ideas that we had about beacon p2p? I don't have anything particularly interesting to talk about. I think we just had all the beacon nodes be on, like, shard negative one or something, and basically have a network for all the different shards. Okay, so using the same construction that has topics, and just having the beacon chain stuff be on its own topic? Right, yeah, I think that made sense for us. All beacon nodes are going to be on shard negative one, or some construct like that, right.

Actually, in our original thoughts, the beacon chain and other messages which are global to the nodes can be served over a global gossip channel. For each topic — I mean, the beacon header, for example — it can be a global topic subscribed to by everyone. So in that way they are all in the same network, but segregated by the topics.

Right. I guess one question is how many topics per shard do we want?
It might make sense to have one for the headers, one for unsigned blocks and unaggregated signatures, and then one with the fully signed and aggregated blocks. So you can dive into whatever level of granularity you want just by subscribing to the appropriate channels.

Does increasing the number of channels impact the network amplification rate? Well, you will only receive — I mean, if you want to broadcast a message, you will only broadcast to the peers who subscribe to the topic. We can design something like: if we receive messages for a topic we're not subscribed to, we can ban the peer, something like that. So you will not be affected by other topics. Yeah, but if you're on another channel, I suppose it's going to have an impact on the discovery mechanism of that network. Maybe this impact is really small and we don't need to care about it, but it might be that it's not so small, you know — that's what I'm thinking about. Yeah, I think it's worth testing and simulating to see the impact of this.

Yeah, okay. I think one strategy to mitigate this is to have a common discovery layer for all the channels, and then have the gossip layer on top of the discovery layer. Do you mean we'd have a shard preference channel for the discovery? I mean, yeah, that could be one way to do it, where you have one kind of meta-channel where people tell other people about which other channels they're subscribed to. Yeah, so we currently have a global channel to do that. We're also exploring other discovery protocols, like the random-walk protocol proposed by the libp2p team, but currently it seems not so safe — I think we're still exploring and testing that.

Okay, so there's a Gitter channel that was opened up,
I think this week, where people are discussing these things and working on some proof-of-concept implementations and testing in a simulation. So it seems like an ongoing discussion — pop in there if you have some thoughts or if you want to work on that.

Is there anything p2p-related we want to discuss before discussing the BLS signatures? We're at about an hour; this is probably going to run longer, but if you have to go, you have to go.

Regarding libp2p: since there is no libp2p implementation in either C or C++, it's not worth it right now to try to implement it from scratch, as long as there's no consensus that we will go in this direction, right? Yes, I think so. I mean, for our team — we're using Python, so we will make the network layer mostly in Go, and we will probably choose one of the approaches like Python bindings, or entirely using IPC or RPC to communicate between the Python and the Go, because they only have the Go and JavaScript implementations. So, to answer your question — what was the question? Whether we have consensus on it, so it's probably not worth implementing yet? Yeah, that was the question. That seems to be my understanding of it: if you want to start digging into it and working on an implementation, that'll be a personal decision, but I don't think we have enough testing to say for sure that that's what we're going to be using at this point. Okay, perfect, thank you. Got it.

Do you want to move on to the next topic, which is BLS signature standard libraries?
There were a bunch of links posted. I know that's one of the big things teams have been realizing: there aren't necessarily great standard libraries in all these different languages.

Well, for BN128 there are standard libraries, at least some, and we kind of helped standardize them because we put it in as a precompile for Byzantium. So the one thing that I think might end up being tricky is that if we intend to migrate from BN128 to BLS12-381 — and I'm still not sure yet what level of difficulty it'll involve to switch all these libraries over, whether it's a five-line code change or something more substantial.

What's the benefit of changing the curve? Basically, it's got a higher security parameter — Zcash is changing to it, basically. Well, okay, I guess there are two reasons. One of them is that the curve has a higher security margin: it goes up from something like a hundred-and-a-couple bits to the full 128 bits. Another argument is that in the future, with Zcash and a bunch of other projects switching to BLS12-381, it looks like this will be the curve that people are going to standardize around for some time, and so with that in mind, it's worth it to kind of go with the flow, I guess. Yeah, it's also a curve that Chia is intending to use.
Yeah. Um, the chances of finding another curve — I would say, with the effort of standardization that's been going in, another curve would have to have really substantial advantages, which would basically mean some kind of discovery of something broken in BLS12-381. Which could happen, but intuitively it seems to be relatively low probability.

Hmm, I guess one property that a new curve could have that would make it better than BLS12-381, without BLS12-381 being completely broken, is if the new curve actually came as a pair of curves, where the modulus of one was the curve order of the other and vice versa, because that would be really nice for ZK-SNARKs. And I guess it's plausible that people find that in like five years or whatever, though even in that case there would be no reason for us to switch, and there would be no reason for any application other than ZK-SNARKs to switch. Thanks.

I mean, what further discussion do you all want to have about this? There are a lot of implementations that someone referenced in the GitHub link. What is the process for standardizing these libraries, and is there work that needs to be done? Any other thoughts on BLS signatures?

I guess, for myself personally, to be able to update my Python BN128 library to BLS,
realistically I'd just need to know what all the parameters are and what kind of curve it is. Possibly a couple of hours of hand-holding by some cryptographer who's deeply involved in this would get it done extremely quickly.

Another thing: what's the outlook for a fully audited, let's say low-level C API kind of reference implementation of this curve? So, the Rust implementation, which is being spearheaded by Zcash, has been audited, and the abstract specification of the curve has also been audited by a different security company. The Rust library is relatively mature in the sense that it's been worked on for many years and, you know, has been audited. Does it have aggregation in it at this stage? It's mostly for the base-layer curve operations, but the aggregation is pretty trivial on top of that — aggregation of BLS signatures really is trivial; it's just point addition. Yeah, we kind of hacked together our own just off your one, Vitalik, but it's about as safe as broken glass, in my opinion. It might maim a cryptographer who walks across it. Yep, totally.

I mean, I did talk to Juan Benet about the algorithms in the Python, and he said that the hash-to-G2 function I created is fine. We also talked about issues around rogue key attacks, and I think our preferred technique for dealing with them is a proof of possession at deposit time, which is not currently implemented yet, but it's fairly trivial to implement. Are you thinking of doing that on the proof-of-work chain, or separately? I'd say just do it on the beacon chain. Basically, my own philosophy would be to try to do as little on the proof-of-work chain as possible, because that makes it as little work as possible to migrate everything from the proof-of-work chain to the shard chains when the time for that comes. Okay, cool.
Just a mild note, too: when I was going through that reference implementation, I ended up down a rabbit hole finding that rogue key attack thing. So it might be worth throwing a note into that reference implementation about the rogue key attack and how the current implementation is vulnerable to it. Sure, and I could probably just add a proof-of-possession method in there. Yeah, and it's at least referenced in the v2.1 spec, but it's still implied that it would happen on the proof-of-work chain. If we move in that direction, the proof-of-work chain deposit contract might just become a burn contract, and everything else, including the proof of possession, would be validated on the beacon chain. And I think it does make sense, just because eventually, when people are coming in from the shard chains, they're going to have to be depositing from the shard chains and doing similar things, so it would make the flow the same.

Right. So from a research perspective, I think it's clear that we want to do as much as possible on the beacon chain. One question that remains is how we do the bootstrapping of the initial validator set, because you need validators to process the transactions that on-board new validators, and in the beginning you wouldn't have any validators. But yeah, it's possible — I'll write a research post on that soon.

One small note on the Rust: in addition to being audited, it's very performant, and the one known downside of it is that it's not constant-time crypto, so it's possible that it's vulnerable to timing attacks, which could leak private information. Though one thing I think is important to add is that the computations that need privacy protection aren't the pairings — those are just verification.
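For readers unfamiliar with the rogue key attack mentioned above, here is a toy illustration using integers mod p as a stand-in for elliptic curve points. This is not real BLS — there is no pairing and no hash-to-curve, and all values are made up — but it shows why naive public-key aggregation is unsafe without a proof of possession:

```python
# Toy rogue-key attack on naive public-key aggregation. Integer
# addition mod P stands in for elliptic-curve point addition; real
# BLS uses BLS12-381 points, which this sketch deliberately omits.
P = 2**31 - 1  # a prime "group order" for the toy

def aggregate(*pubkeys):
    """Naive aggregation: just sum the public keys (point addition)."""
    return sum(pubkeys) % P

# An honest victim publishes pk_victim. The attacker wants the
# aggregate key to equal a key she fully controls, pk_evil.
pk_victim = 123456789
pk_evil = 42  # attacker knows the secret behind this key

# Rogue key: subtract the victim's key so it cancels in the sum.
# The attacker cannot produce a proof of possession for pk_rogue,
# because she does not know its secret key.
pk_rogue = (pk_evil - pk_victim) % P

# The "2-of-2" aggregate is now entirely attacker-controlled,
# so she can forge signatures that verify against it alone.
assert aggregate(pk_victim, pk_rogue) == pk_evil
```

The defense discussed on the call — a proof of possession (a signature over the public key itself) checked at deposit time — blocks exactly this, since a valid signature under `pk_rogue` requires a secret key the attacker doesn't have.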
They are the elliptic curve multiplications, and knowing how to make elliptic curve multiplications privacy-preserving is something there have been decades of research on, so I don't really see a fundamental obstacle, or any reason why it should be more difficult than before. Right, it just hasn't been done yet. Yeah, one general note about Rust would be that the platform support is sometimes lacking for more exotic platforms, but that can be solved in other ways as well. That's why a C implementation tends to be the lowest common denominator — you can put it on basically any hardware out there and just tweak small parts, whereas with Rust you might have to, I don't know, update the compiler.

Other comments on BLS signatures? Okay. Regarding action items for clients and research: I don't really think we're necessarily at that point. I think there are a lot of separate efforts going on, and you know what you need to be working on. And in terms of timing for future meetings — is this time reasonable? Thursdays at 2 p.m. UTC, every other week.

Sorry, Danny, thank you — did you skip the current state of cross-shard communication research? So, I thought maybe what was said earlier was what people wanted to say, but yeah, I guess we were talking more about your stuff at the time. Do people want to talk about cross-shard communication research, or thoughts, or anything at this point? Maybe the person who raised the agenda item — I don't want to speak about it. So, the current v2 spec doesn't really go through cross-shard communication as much, so I was wondering if the research team has done any more formalized work on it, or if they have any initial ideas.

Yeah, so it's worth noting that the v2 spec doesn't even cover the state transition function at all, right?
So it's purely at the data level at this point, and in terms of how to actually run the state transitions, I think they're all over ethresear.ch — the research posts on cross-shard transactions and yanking and so forth, and I guess some of the research into the synchronous stuff — and that's basically the extent of it at this point.

One thing I'd like clarification on from the roadmap architects: what I'm hoping to do is to prototype the phase two execution engine in a way where it's decoupled from phase one, so we can just black-box phase one, ignore all its details, take as given some ordered set of shard blocks and some unordered set of data blobs with crosslinks, and just prototype phase two. Are they sufficiently decoupled for this?

I would say — if they were really to be decoupled, then one thing that would force is that execution and data consensus would actually have to be separate, which would mean the blocks would not contain state roots, and there would be separate processes for agreeing on state roots and so on. Which we could do, if that's what we wanted to do. That's delayed state execution. Yeah — basically, I think if we make an agreement that we're doing delayed state execution, then the two are fully decoupled; if we decide that we're doing state execution at the same time as data consensus — so basically the current model — then the two are coupled. What do you mean, current model?
As in the Ethereum 1.0 model, where blocks have state roots. Okay. I mean, one of the things we're considering is not shuffling the proposers very often in the shards, and that would be specifically so that they don't have to incur the cost of syncing the state. But in a stateless execution model, you don't have that issue either, necessarily — except for maybe the caching optimizations.

Yes. So, in general, I think phase two — the execution engine and the problem of cross-shard communication, cross-shard transactions — is relatively understudied compared to the details of the consensus protocol. So I'm hoping to spark more interest in addressing this problem orthogonally, decoupled from phase one. And even the names "phase one" and "phase two" give the impression that you can't start working on phase two until after phase one is already built, so I was hoping to clarify that we can start working on phase two in parallel, while other people figure out all the details of phase one. We need to be thinking about this problem.

I'm excited to see what you have going on. Can you keep us updated on ethresear.ch or in the sharding channel? Yeah, certainly. It's a little slow going for us, the Ewasm team, because our top priority is the Ewasm testnet, so we're spread a little thin trying to work on a phase two prototype and launch the Ewasm testnet at the same time, but we expect to pick up the pace once the testnet is closer to launch.

Is there anything else anyone wants to talk about before we close this first meeting?
Just — sorry, after you; a fair bit of latency there. I just wonder if there are any thoughts about having a get-together workshop, perhaps around DevCon, during or associated with it. If anyone's got any thoughts or plans around that, it'd be good to know. I think there were plans, potentially, to have a sharding event immediately before or immediately after DevCon. I think that would make sense, so that people don't have to travel — everyone's there. So, from Status: we're hosting a hackathon just the days before; we could perhaps use that venue in Prague. Yeah, the days before. This is something I need to check with the rest of the team, whether we'll have room for it, but it's a possibility that I can investigate.

Okay. I think there's probably general interest, I imagine, but in terms of organization, resources will be tapped. So if someone wants to take the lead on it — Status or otherwise — just fill everyone in and see what the consensus is on wanting to do it. I would venture to say there might be some sort of breakout session around sharding during DevCon, but before or after would probably be more appropriate for a more in-depth session. Well, that's great. I just wanted to plant that seed.

I think Paul and maybe Raul had something to say. Yeah, I just wanted to touch briefly on the get_shuffling function. I know before it had a little infinite-loop thing that it did — I'm just quickly looking at it now, and it kind of looks like it might still do that. Is that the case? Which shuffling function — we talked about validator shuffling or something? get_shuffling, in the Python spec. Mm-hmm.
Yeah — it has a loop like that, basically. So it is the case that there's no upper bound on the number of steps it can take, but it's a probabilistic algorithm, and there are very sharp probabilistic bounds on it. Basically, the problem is that in order to get an unbiased random number below some value, from a random source whose range is larger than that value by something that's not a multiple of it, there is basically no way to do it without throwing data away at least some of the time. So that's basically not going to change, but it's also not something that's dangerous. Okay, sure, I'll check it out. Thanks. Yeah, I remember running into something too, Paul — we can go check it out. Thanks, Vitalik.

And Raul, I keep seeing you on mute. You ready? Yeah, I just want to talk about maybe having a shared repo for testing infrastructure — maybe storing the contracts that we all have. Everyone on the call could be made a contributor to that repo, and then we could all just keep it as a shared place, just to make sure that everyone's on the same page with regards to, like, the VRC stuff, and open up issues for possibly coming up with a shared testing infrastructure — kind of like what we currently have on eth1.0 for the different teams. Yeah, I know that we're already talking about having shared repos and everything, but it would be cool to just push forward on that more. I agree, especially on the testing front. Do you think that we're ready to begin with some shared testing? No, not yet.
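The rejection-sampling behavior Vitalik described for get_shuffling above can be sketched as follows. This is a hypothetical standalone helper, not the spec's actual code: the loop has no fixed iteration bound, but each draw is rejected with probability less than n / 2^24, so the expected number of iterations is very close to one.

```python
import hashlib

def unbiased_rand_below(n: int, seed: bytes) -> int:
    """Draw 3-byte values from a hash stream and reject any draw at or
    above the largest multiple of n that fits in 2**24, so the result
    modulo n is unbiased. No fixed upper bound on iterations, but the
    probabilistic bound on running time is very sharp."""
    rand_max = 2 ** 24
    cutoff = rand_max - (rand_max % n)  # largest multiple of n <= 2**24
    i = 0
    while True:
        h = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
        x = int.from_bytes(h[:3], "big")
        if x < cutoff:
            return x % n
        i += 1  # reject and redraw -- the data "thrown away"
```

Taking `x % n` without the cutoff would slightly over-weight the low residues whenever 2^24 is not an exact multiple of n; the rejection step is what buys the unbiasedness at the cost of an unbounded (but sharply concentrated) loop.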
I just wanted to bring it up. Okay — we'll either get something together in the next couple of weeks, or maybe right after the next call. Okay, anything else sharding-related, or anything anyone has to say?

Thank you everyone for coming. I think that was at least mildly productive. We'll plan on meeting two weeks from today at the same time; I'll probably just start scheduling the meetings for an hour and a half, and if we break early, that's fine. I think we'll generally have a lot to talk about. Keep following each other's research, and if you have any questions, comments, or discussion, pop into the Gitter channels, or reach out to us directly, or to whoever might be able to answer. Um, cool.

So, one thing I forgot to mention is that tomorrow, with Gitcoin, I will be doing a small presentation on VDFs, and then there will be time for kind of a sharding AMA, if anyone wants to join. I'll be tweeting about it soon. What platform will that be on? Gitcoin — I think it will be a Zoom meeting.

Cool, thank you everyone. Thank you everyone. Thank you.