Yeah, so, hold on — import time, right? Something like 0.01 seconds in the Python implementation, and we already know Python is horribly slow. And the multiply in G2 — okay, so it actually takes about the same time as a multiplication of G2 points. Pairings definitely take considerably longer than that, so I only expected it to add maybe a few percent. It would still be interesting to get numbers, though. Yes — in my benchmarks, the Milagro library in Java takes 0.2 milliseconds, while a pairing takes 10 milliseconds. Yeah, that sounds about right. We should move on to the other client updates; we can dig into this later if you want. Sure. All right, cool. Let's continue. Who's next?

Hello. We've been working on the attestation logic and are now somewhere in the middle of that. There were also several small fixes reflecting the latest spec changes, which have started to appear pretty fast. One thing that's worth a mention is that we have implemented a multi-validator service that carries as many validators as the JVM can hold. This is pretty useful for testing and benchmarking, because once we get the BLS logic implemented, we can run, say, a thousand validators and measure the time of signature aggregation and proposing. That time won't include network latency, but network latency can be emulated. So yeah, a pretty useful thing for benchmarking. That's it for now.

Syed, how about Lodestar? We're almost done with our implementation of simple serialize. We're going to start testing soon and packaging it up into an NPM module, so expect that before Devcon. We're also going to start writing simulations for certain functions in the beacon chain spec, just to get a deeper understanding of them before we continue implementing. And that's it on our end in terms of updates. Cool, thanks. How about Parity?
So we are building a Shasper implementation using the Substrate framework. So far we've implemented the state transition, and it matched the Python reference implementation as of at least two weeks ago. We also have some basics for the networking, transaction pool, et cetera. So far it looks good for us. Substrate is meant to be a general blockchain framework, and we do get real benefits from using it — we don't need to rewrite the block import and pool logic for the beacon chain — so that's why we're moving slightly faster. Next, I think we'll focus on implementing the Substrate features that might be potential blockers for Shasper. There are basically two things we found. The first is, of course, pluggable consensus — we haven't got that wired through yet; we implemented according to the Shasper spec. The second is support for multiple storage roots. We don't need that just yet, but we will once we get to the validator trie or something like that. So yeah, that's basically it for us. Fredrik, do you have anything to add? No, probably not. Okay, thank you.

How about Lighthouse? Yep. We've been focusing on block processing and block pre-processing, and keeping up with a lot of the spec changes. We've also drafted the simple serialize spec and built some YAML tests for shuffling and a list of other things — we haven't actually released those yet. We've also been looking at the networking side of our client: that means looking at gossipsub in Rust, and implementing gossipsub in Rust is going to be our next little venture. Yeah, that's about it from us.

Cool. And Prysmatic? Okay, so we did our little proof-of-concept demo release, in which we basically allow a single validator and a network simulator to advance the beacon chain based on state transitions.
And we relaxed a few of the constraints and some parameters to allow that to happen. That was a big milestone for us. Another thing we're working on right now is getting BLS integrated. For the signature aggregation, we're dealing with some issues finding a good Go library that has a permissive license and also has everything we need for BLS12-381. Aside from that, we're also implementing the fork choice rule at the moment, using the last finalized slot, last justified slot, and current block slot as weighting factors in the scoring rule — in the meantime, while we wait for a decision between immediate-message and latest-message GHOST. So, just wondering from the research team: are you guys leaning away from immediate-message GHOST now? Still thinking, and we need to think about this more, but definitely looking at latest message. I also made an implementation of latest-message GHOST, in the clock disparity folder of the research repo. It's surprisingly simpler than immediate-message GHOST in certain ways. I see, cool. Thank you. Aside from that, that's mostly what we're working on — getting all the BLS stuff in. We're going to start work on simple serialize as well very soon.

Cool. Nimbus? In terms of updates, we were more focused on low-level robustness than high-level achievements in the past two weeks. Still, we have implemented sparse Merkle trees in Nim, and benchmarks should be coming, I hope, in the next two weeks — it was done by someone outside of Status through bounties, so the timeline is not decided yet. There's one thing we are concerned about: one of the goals of Status is to support all devices, mobile and desktop, from the last five years, and currently libp2p requires a Unix domain socket. That means it only works on the latest Windows 10, and won't work on, for example, Windows 8.
So we are raising that with the libp2p team. Also, we hardened our simple serialize implementation with regard to alignment, and made some propositions to the SSZ spec about padding. Hopefully, if we get some kind of test format decided today, we can start working on a test generator so that we can test all implementations against each other. Another thing — this is more about EVM 1.0 — is that we had some lessons learned dealing with unsigned and signed ints in EVM 1.0, and I hope to give some constructive feedback about that when we reach the EVM 2.0 design. Our focus for the two weeks before Prague will be to create several benchmarks, so that when everyone from Status meets, we can test the 2.0 client on phones, routers, and Raspberry Pis, and have some points of comparison against our regular desktops. As a side note, a bit off topic: Status has been moving away from Slack to full Whisper via Status Desktop, and we will be developing a Whisper-to-Gitter bridge so as to live completely on the blockchain.

Right. Regarding the sparse Merkle trees — I don't know if people saw, but I made a sample implementation of how to optimize them by basically layering hex Patricia trees on top of them, so you get the same level of database efficiency. That might be something useful for other people. Vitalik, where can I see it? I just posted it — the link is in this chat now. Thank you. Okay. Cool.

Sorry, I've been a little bit absent — I'm giving up on YouTube for now and recording locally. Cool. Now, did I miss any client? Any updates from research?
I guess in general on the spec: as we've moved to maintaining it primarily on GitHub, there's been a lot of rapid development. A lot of it has focused on cleaning things up, making things clearer, and making minor adjustments. There have definitely been some additions, but much of it has been reorganization and renaming — making things cleaner for people reading the spec for the first time. Other than that, what's going on in research?

On the spec side, one of the bigger items that's probably worth dedicating more time to is the possibility of replacing the hash algorithm with some kind of Merkle hashing. I made an issue about this — I think it's issue number 54 — so it would be good to get more feedback and comments on it. Basically, the idea is that instead of hashing everything flat, you hash the object as a Merkle tree, and the Merkle-tree hashing is done along the lines of the structure of the object itself, because that makes things simpler in a bunch of ways. As part of that, we would probably merge the crystallized state and the active state. Yeah. Any other comments on that, or more research updates?

On the randomness beacon, there was a nice little improvement to RANDAO that was found. Basically, it's about hardening RANDAO against orphaned reveals. An orphaned reveal is when someone reveals their RANDAO commitment, but for some reason or another it doesn't go on chain — it could be latency, or it could be active censorship. The problem with orphaned reveals is that when the revealer is next invited to reveal, everyone already knows what they're going to reveal, so it's as if they had skipped that slot. The simple solution is just to count the number of times that a revealer was invited to reveal but no block proposal made it into the canonical chain.
Then, when the revealer does eventually reveal, they reveal n + 1 layers deep, where n is the number of times that they didn't reveal on chain.

Another nice piece of progress on the VDF side is that Filecoin has confirmed that they are collaborating with us on a 50/50 basis on the financial side for the various studies. We have three studies that we're looking to do, and we're starting with an analog performance study: basically, seeing how much performance can be squeezed out by designing custom cells at the transistor level. There's also been some progress on the state of the art of modular squaring circuits. The Chinese team we're in discussions with reduced the latency from 7 nanoseconds down to 5.7 nanoseconds — roughly a 20% improvement — and they did that over the Chinese holidays. I've also been spending some time thinking about how we would organize the circuit competition. We want to invite anyone in the whole world to participate in a competition to find the fastest circuit, with a large bounty attached. One of the complications is which cell library to use for benchmarking the various circuits: generally the cell libraries have a lot of IP protections, and the vendors are not very keen to work with open-source projects. Luckily, there's a 7-nanometer predictive PDK called ASAP7, developed by academics and open source, at least for academic use, so we're talking with them about being able to use it for everyone in our competition. The other nice thing about this library is that it's a FinFET library, and we want to design a FinFET ASIC for high performance.

In terms of the spec, I will gradually be spending more time on it. And just as a heads up: I think the pace of change to the spec will continue, or maybe even accelerate, in the near future — maybe all the way through the end of 2018.
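The RANDAO hardening described above — revealing n + 1 layers deep after n orphaned reveals — can be sketched as a hash onion. This is a toy illustration with SHA-256 and made-up parameters, not the spec's exact construction:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_onion(seed: bytes, layers: int) -> list:
    """Precompute the reveal chain: element i is H applied i times to the seed."""
    chain = [seed]
    for _ in range(layers):
        chain.append(h(chain[-1]))
    return chain

# The validator commits to the outermost layer.
onion = make_onion(b"secret-seed", 100)
commitment = onion[-1]

# Suppose n prior reveals were orphaned; the validator now reveals
# n + 1 layers deep instead of 1, so the already-known intermediate
# layers never become fresh randomness contributions.
n_orphaned = 2
reveal = onion[-1 - (n_orphaned + 1)]

# Anyone can verify by hashing forward n + 1 times back to the commitment.
x = reveal
for _ in range(n_orphaned + 1):
    x = h(x)
assert x == commitment
```

The point of the fix is visible here: an orphaned reveal exposes one layer, but skipping ahead by the number of missed reveals means the next on-chain contribution is always a preimage the network has not yet seen.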
Can you give some insight into that? Is that radical changes to implementation, or radical changes to how the information is organized and presented? Yeah — I definitely agree that the research has stabilized and all the key open questions have been thought through, but there are lots of details that need to be written down, and the existing detail also has to be tested. There are also some not-so-insignificant changes: for example, merging the crystallized state and the active state, as was mentioned — that's a relatively large change. One of the things I've started doing is adding a to-do list in the spec itself. That to-do list is incomplete, but it gives you an idea of what remains. Outside of the content of the spec, one of my goals is presentation and readability. One of the things I'd like to do when I have more free time is write something very similar to what I wrote maybe six months ago on ethresear.ch for what was called phase one, where there are lots of clear definitions and everything is very polished. I'm hoping we'll get to that state in late 2018 or early 2019. Thank you.

Justin? Yes? I was just wondering — you said there was some improvement in the time it takes to do the squaring operations. Does that mean this was done with a specific field in mind, and was there any kind of theoretical advance, or is it just an optimization? Yeah, I don't have a lot of visibility into how the Chinese team improved the latency; I imagine it's all optimization. One of the things these hardware multipliers have is large reduction trees, and you can try to be clever in how you organize the cells into what I call compressor modules, and then in how you arrange the compressor modules to reduce the critical path of the circuit.
I imagine they did some hand optimizations for that, but I don't have that much visibility. The specific group that we work in is just an RSA group: a 2048-bit modulus, and we're just doing squaring in that group. So it's modular squaring where the modulus is fixed and has size 2048 bits.

Anything else from research right now? Any updates from you, Alvin? Oh, Kevin, thank you. Sorry, yeah. On the sharding side, we merged the Python and Go logic in Py-EVM and the POC repo. Since the Kademlia DHT and other functionality are done in the P2P daemon, we're now more actively testing the daemon and working on the Python binding for it. And the deployment scripts in Ansible and Terraform are on the way. Yeah, that's our update. Cool — and I know Hsiao-Wei has been porting a lot of the Python proof-of-concept stuff into Py-EVM to start working on a more production Python implementation, and she's made a lot of headway there. I kind of skipped that earlier with the client updates. Very cool.

Raul from libp2p couldn't be here, but he has a handful of updates in the agenda. I'll run through them quickly. The libp2p daemon / binding interface spec: they've approved and merged an initial spec for the libp2p daemon, and developed a Go binding implementation that adheres to that spec. It's a continuous evolution, and he'd love any feedback. He's asking when the Eth 2.0 teams are planning to start work on the Python, Nim, and Java bindings for the libp2p daemon — he'd like ballpark dates so they can line up support. DHT support is now merged into the daemon; it uses the libp2p/IPFS bootstrap peers by default, but you can pass different settings through command line options. Spec review and updates: some of the libp2p folks are huddling up this week to review the specs and bring them up to speed with the implementations — watch for pull requests on the libp2p specs repo. And Mike and Raul will be attending the Eth 2.0 meetup on October 29th in Prague.
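As a footnote to the VDF discussion above: the operation being optimized in hardware — iterated squaring modulo a fixed 2048-bit RSA modulus — is, in software, just a loop of sequential squarings. A minimal sketch with a toy modulus (not a real parameter choice):

```python
import secrets

def vdf_eval(x: int, N: int, t: int) -> int:
    """Compute x^(2^t) mod N by t sequential modular squarings.
    Without knowing the factorization of N there is no known shortcut,
    which is what makes this a candidate verifiable delay function."""
    y = x % N
    for _ in range(t):
        y = (y * y) % N
    return y

# Toy parameters: a real deployment would use a 2048-bit modulus of
# unknown factorization, and t large enough that even an ASIC doing
# one squaring every few nanoseconds takes the target wall-clock time.
N = 3233  # 61 * 53, illustrative only
x = secrets.randbelow(N)
y = vdf_eval(x, N, 1000)

# Cross-check: fast exponentiation gives the same value, but it does
# not shortcut the *sequential* work unless phi(N) is known.
assert y == pow(x, pow(2, 1000), N)
```

This also makes the hardware numbers concrete: at 5.7 ns per squaring, a delay of, say, ten minutes corresponds to on the order of 10^11 sequential squarings.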
If any of that was relevant to you, check it out on the agenda and respond to Raul there. Cool. Next on the agenda: testing. We've made some efforts to define a general format for testing using YAML. That opened up a can of worms about how you actually structure those tests, where they live, and their format. It's something I definitely want to talk about in Prague, because I think we'll have Dimitry there from the testing team, and other people who have been in the weeds with the Ethereum 1.0 testing infrastructure. But if people have comments and would like to discuss some of it today, feel free to chime in now. Yeah — I think the best way forward is just to start with a small scope, like simple serialize, and see how it goes from there, because otherwise we're just talking without anything substantive. Right — I think it's reasonable to use it for SSZ and maybe the shuffling algorithm and some of these little things; not big chain tests and the like. In that sense I'm comfortable moving forward, continuing the conversation, and hopefully making some firm decisions in Prague. Testing going once, going twice — any other comments?

Okay: Alexey's alternative tree storage structures, which we didn't get a chance to talk about last time. Alexey, would you like to talk about that today? Okay — I've changed my intention slightly, but I will tell you what I think now, because I've done a lot of research since the last meeting. I quickly reviewed what Vitalik posted; I was actually aware of this optimization before — I think Vitalik presented it in June at our meetup, or at least that's where I heard it for the first time — and I had also thought about using different types of trees for actually storing things in the database, and this is where my current research is going.
Essentially, while working on Turbo-Geth, I'm trying to see where the suboptimality in the trade-offs lies. I'm talking about the trade-off between update efficiency of the storage, access efficiency in the storage, and storage efficiency in terms of the size of the database. There are three things, and I'm starting to believe that we are very far from the optimal set of trade-offs in how we've done things so far. Some of it has to do with the choice of databases we use, and things like that. Basically, my current idea is a proof of concept, which I'm hoping to finish in about three weeks — maybe by the next meeting I can give you data that will be more convincing than words. The idea is to fuse together a key-value database with a temporal element, so that you can store history — something that is lacking in Turbo-Geth in terms of efficiency — along with native support for some sort of tree hashing algorithm, and ease of pruning. There are four stages to this proof of concept, and I'm currently on stage two. The first stage was to implement weight-balanced trees with as tight as possible balance parameters, made non-recursive so that bulk updates are efficient. I've done this already, and there are a couple of lessons I've learned, but I'm not going to go into them right now.
Stage two, which I'm working on now, is this: you take these binary search trees and start grouping subtrees — little fragments — together, as Vitalik described in his post, so that they fit in a page, like a 4-kilobyte page, for storage efficiency, and you then use page split and merge operations to maintain that page size. At the end of this stage I want to see how much storage efficiency and I/O efficiency I get when mutating this kind of paged tree; I'll pretend that I'm mutating the Ethereum state from mainnet, for example. Stage three is maintaining a prunable history of changes, which introduces the temporal element: not only do I store the tree of the current state, but also the history. A lesson I learned from Turbo-Geth is that when you store the history by recording updates relative to the past — to one block in the past, for example, what I call forward deltas — it becomes really difficult to prune such a structure. But if you record the updates the other way, always recording the reverse diff from the current state, then your past records always reference future records, and pruning becomes pretty much trivial. Then, obviously, there's the efficiency of access, which I'll have to measure. I hope to finish stage three in about a couple of weeks, though maybe that's too ambitious; it's basically about getting some numbers. Stage four, which is the most interesting for this call, is — again, similar to Vitalik's idea — embedding different tree hashing algorithms into the database itself. Essentially, you could use Patricia trees, or sparse Merkle trees, or maybe AVL trees, kind of in the same place as the WBT, which is the
weight-balanced trees that I'm using at the moment. So you try to record one hash per page, to assist in computing the hashes — which, again, current Turbo-Geth is lacking: you can't do a fast sync, only the full sync. When I've completed this proof of concept, I will know whether this whole set of ideas works, and hopefully I can talk about it in a bit more detail. So yeah, my vision has changed a bit, and I think it's a bit too early for me to present the stuff, because I want to see the numbers first — but this is where I'm going. Thank you.

Yeah, thank you. And to be clear: if you're happy with the numbers, what type of change would you be proposing? Okay, so if I'm happy with the numbers, one of the outcomes of stage four — grafting the different types of tree onto the weight-balanced tree — is that I can measure the overhead you get. Essentially, systems based natively on these weight-balanced trees would use the storage most efficiently, while systems using different hashing algorithms would pay some overhead, and I will hopefully be able to tell what level of overhead that is. If the overhead is, let's say, reasonably large, then I would suggest considering these WBTs instead of the sparse Merkle trees. I understand all the optimizations you can do in an SMT, but my main complaint about sparse Merkle trees, for all their elegance, is that in order to maintain the balance of the tree in adversarial settings, you have to pre-hash the keys — you basically have to randomize the keys — because otherwise an attacker can create long chains of sibling nodes that always have very long Merkle proofs. By very long I mean like 256 — not bits, but basically 256 sibling hashes. If your keys are balanced or randomized, then it's not a problem.
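The adversarial-imbalance point can be illustrated: in a sparse Merkle tree, a key's bits are its path, so two keys sharing a long bit prefix share a long proof. Pre-hashing the key randomizes its position. A generic toy sketch, not tied to any particular SMT implementation:

```python
import hashlib

def key_path(key: bytes, prehash: bool) -> str:
    """Return the 256-bit path a key would occupy in a sparse Merkle tree."""
    k = hashlib.sha256(key).digest() if prehash else key.ljust(32, b"\x00")[:32]
    return "".join(f"{byte:08b}" for byte in k)

def shared_prefix(a: str, b: str) -> int:
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

# Adversarial keys: identical except the last bit, so used raw they share
# a 255-bit prefix and each Merkle proof carries ~255 distinct siblings.
k1, k2 = b"A" * 31 + b"\x00", b"A" * 31 + b"\x01"
deep = shared_prefix(key_path(k1, False), key_path(k2, False))

# Pre-hashed, the same keys land at unrelated positions (expected shared
# prefix ~1 bit) -- at the cost of keeping a pre-image database if you
# ever need to iterate the original keys.
shallow = shared_prefix(key_path(k1, True), key_path(k2, True))
assert deep == 255 and shallow < 64
```

This is exactly the trade-off raised in the call: randomized positions keep proofs short in adversarial settings, but force you to carry the key pre-images around.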
But as we see in current Ethereum, we basically do double or triple hashing of everything. For example, Solidity hashes the keys of mappings; the storage indices you get are then hashed again before being inserted into the Patricia tree; and then there's another round of hashing that happens over the Patricia tree itself. So in this particular case there's triple hashing, and I would like less hashing, again for performance reasons. But again, I will hopefully have numbers later. I'm not sure the key hashing is that big a deal, given that in any of these tree structures you need to hash a huge number of times to do the tree updates anyway. Well, the key hashing — I understand it's not the biggest performance hit, though at some point I had to optimize it away because it was coming up at the top of my profiling. But it also creates inconvenience, because you have to keep the pre-image database, and currently, in an archive node in Ethereum, the pre-images are about 16 gigs. If you want to iterate over the state and the like, you have to carry this pre-image database with you. But let's put it aside for the moment; we'll have a more informed discussion when I have the numbers, hopefully in a couple of weeks. Okay, cool.

The next thing on the agenda is just general v2.1 discussion. A lot of things changed and were added, as we noted — are there any questions or comments right now? Also note that because we're using GitHub for this, there's a rich conversation going on via the issues and the PRs, and we'd love more input there. If you see something that's wrong, please raise an issue; and when there are PRs on something you've been keeping your eye on, or want insight and feedback on, please don't hesitate to pop in. But for now we have time to
discuss anything if you want to. I have one thing to ask, developing the idea about tree hash functions that Vitalik published: I would like to propose storing big structures, like the validator set, in a tree. It seems a pretty obvious thing, and since nobody has mentioned it before — is there some difficulty with it, or has it just not come up? We did mention it, in the context of the simple serialize tree hashing — that was the spec proposal I mentioned here a few minutes ago; I think it was opened a number of days ago. Yeah, but as I understand it, that is just a way of making a hash from big structures — it's only about hashing. What kind of tree structure are you thinking of here? I'm thinking about storing the validator set in a Merkle tree — maybe a Patricia Merkle tree, maybe a sparse Merkle tree, it doesn't matter much — and using, for example, the public key as the path in the tree, the same way account states are stored. Part of the reason why we did things this way, with indices, is that the validator set is potentially large — it could go up to a few hundred megabytes — and we want it to be maximally easy to just store the whole thing in RAM. So basically, I would be worried that adding any structure other than a simple list would lead to huge inefficiencies. Will it be okay to store this big structure in RAM? Is it really needed?
The entire thing definitely has to be in RAM, because literally the entire validator set gets updated every time there's a recalculation. So first of all, there's basically not much benefit to a data structure that makes it easier to change small pieces at a time, precisely because we're changing everything at once. Right — and it's needed in RAM because every time you're processing an attestation, you need to compute a group public key from the validators, and you don't know which validators you're going to need before you get the attestation. Okay, good. Then I'm just wondering — maybe I'll work it out myself one day — how big the footprint of all the stuff we're going to hold in memory will be. And anyway, we will have to store all the structures on disk in case of restarts; we need to load them from somewhere. That's why trees seem like a good trade-off between storing on disk, accessing, and hashing. Okay. So are you saying that if you can demonstrate that you can store the validators efficiently in a tree, such that it doesn't blow up and eat too much RAM, then it might be worth it? I'm not sure, actually, because we'll have to dump this validator set to disk from time to time when it changes, and then, when the app is starting, load it from disk into RAM, right? That's what I'm worried about a bit: it's going to be a huge structure that has to be stored to disk in one go — maybe a gigabyte or something like that. Well, if you dig into it any more or have more thoughts on it, please share with us. Yeah, sure. Any other v2.1 — or whatever version it is anymore — any other spec discussion, questions, thoughts? So, to recap what we have just discussed: we prefer to keep everything in memory, right? We need to be able to keep the crystallized state in memory in general.
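On the footprint question raised above, a rough back-of-envelope helps. The per-validator record size here is hypothetical — a 48-byte BLS public key plus some guessed fields, not the spec's actual layout:

```python
# Hypothetical per-validator record: 48-byte BLS pubkey, 32-byte withdrawal
# credentials, 8-byte balance, plus ~40 bytes of status/misc fields.
BYTES_PER_VALIDATOR = 48 + 32 + 8 + 40  # = 128 bytes, an assumption

def footprint_mb(n_validators: int) -> float:
    """Raw in-RAM size of a flat validator list, in megabytes."""
    return n_validators * BYTES_PER_VALIDATOR / 1e6

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9} validators ~= {footprint_mb(n):7.1f} MB")
```

Under these assumptions, a million validators is on the order of 100+ MB as a flat list — consistent with the "few hundred megabytes" ceiling mentioned, and small enough that keeping the whole list in RAM stays the simpler option compared to a tree.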
And any technique that would make that infeasible is probably not a road we want to go down. That said, you're probably also taking snapshots of the state, maybe every cycle or so, and storing them in your database — at least since the last finalized state; you could probably prune beyond that. So you're storing the current state in memory, but you probably have references to snapshots of it in the database. And you could potentially have multiple crystallized states, right? Right — you could receive two conflicting blocks that cause a state transition, one of which would be considered the head, but the other maybe a close second. In that case, you would have two conflicting crystallized states locally in your database, which you would only prune later, once you've finalized which direction the chain goes in. Do we have ideas on how much of the crystallized state updates every round? There is a maximum amount of validator shuffling that can happen per cycle, or every some multiple of cycles — what was it, Vitalik, 3% or 10%? Is it that 3% of the validator set can shuffle? I think more precisely 1/32, but same thing. Okay, okay. So that amount of the validator set can change on a roughly per-cycle basis. And then the shuffling — which is being debated as to whether we should actually keep it in state — can change on the order of every cycle, and I guess all the other fields are a lot smaller, but those can also change about every cycle. So eventually the full crystallized state changes over some longer period. There's no generational aspect that we can exploit. Well — the validator set can only change every, I think, four cycles, and only 1/32 of it can change at a time.
So you do have some generational aspects there, and your validator set is essentially the largest component of the crystallized state at any given point. All the other components are smaller, but those can fully change, and will fully change, often. Even though it's true that the whole validator data structure will change over a whole cycle, don't we have bounds on how much can change per slot — like 128, for example? In which case, would it make sense to try to amortize the cost on a slot-by-slot basis instead of doing one huge batched operation at the end, which could be quite expensive? For the shuffling — for bringing validators in and out? No — for example, updating the balances: if someone made an attestation, then maybe we can just reward them in the slot in which the attestation was made. So I think the problem with that is that it would mean we change 1/64 of the validators every time, and with the Merkle tree we would be doing something like six times more hashing than we do now, because we would have to update a bunch of extra Merkle branches, whereas currently we just reconstruct the entire tree — or rather, currently we just hash, but with tree hashing we would reconstruct the entire tree. All right. And Jacek, to be clear: the bulk of the validator records stay stable, but their balances — almost all validator balances — would update every cycle. So unless you pull the balances out into a separate data structure, you lose, I think, some of your generational aspects there. Yeah, I got it. The devil is in the details. Any other thoughts on this stuff right now? Apologies for the technical difficulties and for my relative absence in the first half of this meeting. Stay tuned to the spec repo — there's a lot going on there. Also, I just finalized the location for our event.
I will be sending out details regarding the event, the tentative schedule and the proposed working groups shortly. Can I ask one question, if we're getting towards the end of the call? This might be the appropriate moment. A few sessions back, someone brought up a question about aggregation of signatures, and specifically whether including a given signature several times introduces any kind of security problem. I would just like to know whether it's a matter of taste that one would prefer an aggregate signature which represents any given signature at most once, or whether there is any, how to say, security reduction, any kind of security problem, with including a given signature multiple times. Part of it is, I guess, a matter of taste, but if you start admitting that, then the set of which keys are included can't be a bitfield anymore; it has to turn into some weirder, more complicated data structure, and that could reduce efficiency and really increase protocol complexity in other ways. So it might increase the data necessary to untangle the aggregate signature, but it might also make the process of creating the aggregate signature significantly easier. So I'm just wondering: is it a hard no, something we should be avoiding, or is there maybe a good compromise to be found? I don't think... go ahead. I don't think there are any security implications. I mean, one possible compromise would be to replace the bitfield with a two-bit field, where every validator has two bits and their signature could be included at most three times; that would keep the complexity low and the performance high. Right, and the more bits you allow per validator, the more allowance you have for multiplying the complexity of calculating the group signature or the group public key.
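The "two bits per validator" compromise floated above can be sketched as a small packed counter field. This is a hypothetical encoding for illustration only, not any client's actual wire format: each validator gets a 2-bit inclusion counter, so their signature can appear in the aggregate at most three times, instead of the at-most-once of a plain bitfield.

```python
# Hypothetical two-bit participation field: 2 bits per validator,
# allowing each signature to be counted 0..3 times in the aggregate.

def set_count(field: bytearray, index: int, count: int) -> None:
    assert 0 <= count <= 3, "two bits can represent at most 3 inclusions"
    byte, shift = divmod(index * 2, 8)
    field[byte] &= ~(0b11 << shift) & 0xFF   # clear the validator's two bits
    field[byte] |= count << shift            # write the inclusion count

def get_count(field: bytes, index: int) -> int:
    byte, shift = divmod(index * 2, 8)
    return (field[byte] >> shift) & 0b11

num_validators = 8
field = bytearray((num_validators * 2 + 7) // 8)  # 2 bits per validator

set_count(field, 5, 3)   # validator 5's signature included three times
set_count(field, 0, 1)   # validator 0 included once
print(get_count(field, 5), get_count(field, 0))  # 3 1
```

A verifier would then scale each public key by its counter when reconstructing the group public key, which is the extra complexity the discussion above is weighing against easier aggregation.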
I don't know; I would prefer to see a reasonable solution that doesn't have multiple representations for an aggregate, but I agree that there might be some compromise there, depending on the aggregation strategy offline and maybe some of the real-world results around that. Yeah, and I definitely want to see real-world evidence first. Of what kind? Basically, evidence that allowing multiple inclusions in a signature would substantially increase efficiency in some concrete way. Right, I'd like to keep adding multiple bits per validator to the bitfield as an option, depending on how it tests out, rather than adding complexity at this point. And I think we still have some work to do to define what that aggregation strategy is, and until we do, I don't want to add the complexity to the data structures. Well, we're thinking about aggregation, so we'll keep you posted as soon as we have anything to share. Great, and aggregation is one of those topics that I'd like to dig into in person at our Prague meetup. So, cool: open conversation, open floor, any questions or comments before we end this thing? Thank you. I recorded the call locally and I'm going to try to get it up soon. Again, my apologies for the technical difficulties; I'll try not to let it happen next time. One final question: do we want to meet on, I believe it's the 25th of October? It's four days before our in-person meetup, right between Web3 and DevCon. Do y'all think it would be worthwhile to meet then? Are people available? Any reaction? I think it would be nice to meet everyone. You mean the call? Yeah, sorry, I mean the call. I mean, should we? Yeah, I'm happy to do the call; that's just my opinion, but I'm happy to do that. I think we'll be in transit from Berlin to Prague on that day, so it might be a little bit tricky for us.
We'll be busy in Prague at that point as well, so it might be tricky for us too. Okay, I'm going to mark it a maybe, talk to some of y'all offline between now and sometime mid next week, and decide whether we're going to do a call then. It's, I guess, kind of the craziest time of the year; it's like Christmas for Ethereum. All right, cool, let's end it. I appreciate y'all making it. Talk to y'all soon. Thank you very much. Thank you. Thank you. Bye-bye. Thank you. Thank you. Bye-bye. Thanks. Thank you. Bye-bye.