So, there's the agenda: testing and release updates. I'll start on that. My focus has been on v0.12. The last thing in the queue there is getting the upgraded BLS, including draft 7, merged into the spec for the new test vectors. I'll be looking at that today. We're trying to get this out in the next couple of days. I know it's a little late, and we've been the blocker; there have been a number of things going on, but getting this BLS in is the last item. Other than that, there are a number of networking updates and modifications that came out of the networking call. Thank you everyone for the input and review. Some increased testing is also going to come out of it: there were some corner cases, especially around handling multiple different operations in blocks, that could result in modifying state in the middle of state transitions. So it might catch some new bugs on your end. Cool, so that is imminent. Any other updates on this?

Cool, let's run right into client updates. First we'll have Teku. Sorry, Danny, just thought I'd give a beacon-fuzz update first if that's all right. Thanks, Mehdi, please. It's been a little while. No, no worries. So we actually pushed a blog post last week that details all the stuff we've been busy with; I'll post it in the chat now before I forget. And yeah, it's been pretty good. We've made a lot of progress on the structural fuzzing: we've implemented and derived the Arbitrary trait on the eth2 types, so we can now produce well-formed instances of custom types from raw byte buffers, which is a huge improvement in our fuzzing coverage. This has already allowed us to identify an integer underflow in an upstream dependency, the snappy crate that we use. The maintainer confirmed the bug and we pushed a PR, but it's yet to be merged.

Over the last few weeks, we've been working with Teku and Nimbus, and it's been really great working with these teams. We've raised a bunch of issues; some of them could have been exploited off the wire, others are more hardening opportunities. The main ones: there was an infinite loop in Teku when SSZ-decoding bitlists without an end-of-list marker, and in Nimbus there was a segfault due to a stack overflow in the process_final_updates function, if I recall correctly. We've updated the beacon-fuzz trophy list; we're now up to 18 unique bugs, which is pretty cool.

We've also made some good progress with the Go integration. Just a quick reminder for everyone: we've been experiencing a lot of issues integrating both Serenity and Prysm, and we've lined up a call with Prysm in a few hours to see how we can better collaborate on this and move forward. If you go through the blog post, we're proposing a new architecture for beacon-fuzz; it's all outlined there. Please feel free to give us feedback, we're super keen to hear from everyone on this. Basically, we're going to move away from C++ and implement the FFI bindings in Rust. I don't necessarily want to go into details here, but we're breaking beacon-fuzz down into three separate tools: eth2fuzz, which will do coverage-guided fuzzing, leveraging the structural fuzzing we've been working on to generate interesting samples; eth2diff, which will let us replay those samples across all the implementations using the nice utilities that you've all been building (zcli, lcli, ncli, and so on); and finally the FFI bindings, which are the core differential fuzzing part. So that's beacon-fuzz v2.
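As a rough illustration of the structural-fuzzing approach described above: deriving Arbitrary lets a fuzzer turn raw byte buffers into well-formed typed values instead of rejecting most inputs at the parse stage. This is only a sketch assuming the arbitrary crate's derive feature; the container type and its fields are invented stand-ins, not the actual eth2 types or the beacon-fuzz code.

```rust
// Cargo.toml (assumed): arbitrary = { version = "1", features = ["derive"] }
use arbitrary::{Arbitrary, Unstructured};

// Illustrative container only; the real eth2 types live in the client crates.
#[derive(Debug, Arbitrary)]
struct PseudoAttestation {
    slot: u64,
    index: u64,
    beacon_block_root: [u8; 32],
    aggregation_bits: Vec<bool>,
}

/// A fuzz entry point: any byte buffer becomes a structurally well-formed
/// value (or a clean error), so the harness spends its time exercising the
/// code under test instead of failing in the decoder.
fn fuzz_one(raw: &[u8]) {
    let mut u = Unstructured::new(raw);
    if let Ok(att) = PseudoAttestation::arbitrary(&mut u) {
        // Hand the typed value to the target here, e.g. an SSZ round-trip
        // or a state-transition routine.
        let _ = att;
    }
}

fn main() {
    fuzz_one(&[0u8; 64]); // stand-in for fuzzer-provided input
}
```

The payoff is that nearly every fuzzer input reaches the logic under test rather than the decoder's error paths, which is what drives the coverage improvement mentioned above.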
There's a nice diagram at the bottom of the blog post if you're interested. So yeah, please check it out; we're super keen to get feedback from everyone. We'll also be pushing Docker images so that the community can help find bugs. I think a lot of people have been asking how they can contribute to eth2, and this might be an interesting experiment; kudos to Justin Drake for the suggestion. We should be wrapping this up next week. And I guess finally, we've started playing with Lodestar. I think we caught a few type errors in the SSZ package, but we're not really the experts there, so I might reach out to Cayman next week to discuss further, because most likely these are caught by the calling packages. And yeah, that's pretty much it.

Awesome. Thanks, Mehdi. Any questions for Mehdi?

Yes. There was a discussion three weeks ago about some beacon states that shouldn't be trusted, so it involves everyone. But apparently, well, last time I looked, those were actually states that couldn't occur in an actual functioning client. So what's the latest news on that? Has it been fixed or modified?

Yes, so that's the reason we're splitting things up, so that we can avoid this confusion. You probably saw an issue or a PR from Danny; I don't think it ended up being merged into the specs repo. But we're probably going to have to sync off beacon states at some point, so in my opinion we might not be able to rely on the assumption that beacon states are trusted inputs for too long. In fact, we've found a couple of overflows in Lighthouse when dealing with invalid beacon states per se. The spec has actually been clarified, right, I think a week or two ago: now, if you overflow in a state transition, it's clear that the state transition is invalid, which is good to see. The structural fuzzing helps us mutate beacon states better, so that we not only have valid BeaconState SSZ containers but also beacon states that are valid per the spec. That's one of the reasons we split our toolset into three separate tools. Those conversations were super interesting in my opinion and raised a lot of interesting thoughts; thanks everyone for being involved. And I guess one of the issues we had as well was that the utilities and CLIs, for example, were hitting the state transition directly, bypassing all the potential checks that are performed at the networking layer. We've now accounted for that and won't be raising such issues if they arise in the future. Hopefully that makes sense.

Right, and I guess to clarify: for syncing, there are probably two ways to sync a network safely once the network has run for more than, say, three weeks. One is to have a checkpoint for a given epoch, block-sync from genesis, and then make sure that the checkpoint you reach matches the checkpoint root you had. That wouldn't involve getting an untrusted state from anywhere. But there's a much better UX around just starting from a state: if you have this checkpoint that you want, you want to actually just start from that state. At that point you're inputting a state, and there's an avenue to input a state into your system. Generally you've gotten it from, say, a trusted source, but there's a small likelihood you've gotten it from a less trusted source and you have some sort of tainted state that you're inputting into your system. And so I think that's the nuance there. Obviously you don't have to get it from the p2p network like I had in that PR, but the idea of getting a state from somewhere and putting it into your system is certainly a flow that I think we're all going to want, because block sync from genesis is not going to be the best UX once we get a few months into this thing.
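A minimal sketch of the two sync flows just described, with invented types and the state transition and hashing stubbed out (in eth2 the checkpoint root is a block root; that detail is glossed over here):

```rust
type Root = [u8; 32];

struct Checkpoint {
    epoch: u64,
    root: Root, // in eth2 this commits to a block; simplified here
}

/// Mode 1: block-sync from genesis, then confirm the chain we replayed
/// actually passes through the trusted checkpoint. No externally supplied
/// state ever enters the node; `replayed_root_at` stands in for the replay.
fn confirm_genesis_sync(cp: &Checkpoint, replayed_root_at: impl Fn(u64) -> Root) -> bool {
    replayed_root_at(cp.epoch) == cp.root
}

/// Mode 2: boot directly from a downloaded state. Better UX, but the state
/// is an untrusted input: at minimum the root it anchors to must match the
/// checkpoint, and its invariants should be validated before it touches the
/// database. This is exactly the "tainted state" avenue discussed above.
fn confirm_state_boot(cp: &Checkpoint, state_anchor_root: Root, invariants_ok: bool) -> bool {
    state_anchor_root == cp.root && invariants_ok
}

fn main() {
    let cp = Checkpoint { epoch: 1024, root: [0xab; 32] };
    // With a stubbed replay that reaches the same root, mode 1 succeeds.
    assert!(confirm_genesis_sync(&cp, |_epoch| [0xab; 32]));
    // A state that doesn't anchor back to the checkpoint root is rejected.
    assert!(!confirm_state_boot(&cp, [0xcd; 32], true));
}
```

The difference between the two modes is where trust enters: mode 1 only trusts the checkpoint root itself, while mode 2 additionally accepts a large, structured object from outside, which is why malformed-state handling matters for fuzzing.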
Yep, it's fine. Cool, any follow-up on that or other questions for Mehdi? Cool, thank you. Glad to see all the progress there, and the trophies on your README, very exciting.

Okay, other testing items? Let's not split that discussion out; we're going to do client updates and then go straight into testnets. Great, so let's get started with Teku.

Perfect, this is Jim from Teku. So in the past couple of weeks, we added snappy compression over gossip and RPC. We also added support for Ping and GetMetadata. We're now randomly subscribing to persistent subnets. We majorly reduced our memory usage while syncing. And lastly, we now have built-in support for syncing Schlesi. That's it. Cool, what does that memory usage look like on, say, Schlesi, do you know? I think it's now averaging around 900 megabytes, though I'm not 100% sure, it's been a while since I checked. Yeah, okay, cool, thanks.

Lodestar? Hey, so in the past few weeks we're finally starting to be able to sync. As of today, we're syncing somewhat stably on Schlesi. We haven't yet reached the head, but we've synced a few thousand epochs, and worked through different gossipsub bugs and syncing bugs and what have you. Still not really stable, but hopefully in the next few weeks we'll nail down some of that node-level stability. It also looks like our discv5 isn't running, so we're stuck with our bootstrap peers, which isn't really sustainable. So there are different things like that we're still working through. Got it, thanks, Cayman.

Nimbus? Hi. So like Lodestar, we had multiple sync fixes in the past three weeks, in particular for Snappy, which had a compatibility issue with Lighthouse. We now have a single make schlesi target to connect to Schlesi. Sync is working slowly but steadily, let's say, and right now the main focus for Schlesi and sync is performance, in particular on Windows. Besides that, we've done a lot of work on multiple memory leaks that were preventing our testnet from lasting a week, some coming from the p2p layer, some from our block caching system, and we've added several memory tracking tools. Also, since we're now focusing on bug fixing, we have tools to debug discovery and to debug libp2p topics and messages received. Got it, thanks.

And this probably goes without saying, but once we have fast state transitions, it seems like the next big culprit is memory usage. I know everyone's been attacking this from different angles, but be sure to knock on each other's doors: there are a lot of pretty solid strategies to go off of now, so you don't have to deal with this alone.

Trinity? Hi everyone, not a huge update this week. Mainly we've been continuing our port to the Trio async framework. We have updates to the latest Beacon Node and Validator APIs, which is good to get into place.
We've also made some progress on bringing more full-time contributors onto the project, which should generally help with everything we have going on. Cool, thank you, Alex.

Nethermind? Just minor updates in the last weeks. We've updated a bit of the OpenAPI specification work and tested synchronization, but had some problems with the Mothra networking. Not much else happening in the last two weeks. Got it, thank you.

Prysm? Hey guys, Terence here. So over the last few weeks we've been working on Topaz maintenance: fixing UX bugs from user feedback as they get reported on the Topaz testnet, and fixing network bugs as they get reported on the multi-client testnet. So nothing really substantive in that regard, just typical bug fixes. We're also fully aligned with spec version 0.11.2 and are working on aligning with v0.12 right now. One of the team has been doing great work on optimizing initial sync; the latest experiment resulted in 100 blocks per second during initial sync, and this is without attestation signature verification, so we still need to optimize signature verification for initial sync, basically what Lighthouse is doing. There's also been great work on slashing detection. Eric Hunter reported to us that one of his validators was earning more money than the rest; it turns out his validator had included a slashing object in a block. So that means our backend slashing service is working, and that slashings are propagating over pubsub, which is pretty exciting. Let's see, we've also been running client production-readiness tests, in particular stress tests and an inactivity penalty test, and we updated a few internal metrics to better support monitoring. We're running 16,000 validators with one-second slots and have seen no issues there. We're starting the inactivity penalty test and waiting a few days to see the outcome. So yeah, we'll continue to push on the stress testing, and that's it.

Nice. Are you all running short slot times on the stress tests? Yeah, we're doing one second right now. Oh, nice. Do you know if you're seeing any more forking than on, say, a normal 12-second slot time? Not necessarily, but we do see about 85% participation versus 99% participation, and that's just due to timeouts. Timeouts from which angle, sorry? From the RPC angle: we have about 2,000 validators on one beacon node, and not all of their RPC requests complete within a second. Gotcha, interesting. Okay, cool. Glad you all are pushing on those. Thanks, Terence.

Lighthouse? That'll be me again. So Paul's been busy implementing hierarchical key derivation for BLS, ensuring interoperability with ethdo by Jim McDonald, which is being used by Prysm. He's implemented the key derivation and generation, the BLS keystore, and the BLS wallet. We're quite excited to announce that we're kicking off our first external security review, with Trail of Bits, on Monday. We've wrapped up the slashing protection work; we spent a lot of time handling concurrency and atomicity guarantees for database transactions, and we've also changed the directory structure to better suit audits in the future. We've been running several 16k-validator testnets over the last couple of weeks. We've seen two panics, one from an upstream package and one from our own code; both have been fixed.
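For context on the key-derivation interoperability mentioned above: EIP-2334 is the path standard that validator tooling such as ethdo and client keystores are converging on, with purpose 12381 and coin type 3600. The parser below is just an illustrative sketch of that path scheme, not Lighthouse's or ethdo's code:

```rust
/// EIP-2334 validator paths: m/12381/3600/<account>/0 for the withdrawal key
/// and m/12381/3600/<account>/0/0 for the signing key (purpose 12381 is BLS,
/// coin type 3600 is eth2). Illustrative parser only.
fn parse_eip2334_path(path: &str) -> Result<Vec<u32>, String> {
    let mut parts = path.split('/');
    if parts.next() != Some("m") {
        return Err("path must start with 'm'".to_string());
    }
    let indices = parts
        .map(|p| p.parse::<u32>().map_err(|e| format!("bad index {p:?}: {e}")))
        .collect::<Result<Vec<u32>, String>>()?;
    if indices.len() < 2 || indices[0] != 12381 || indices[1] != 3600 {
        return Err("expected purpose 12381 and coin type 3600".to_string());
    }
    Ok(indices)
}

fn main() {
    // Signing-key path for the first validator account.
    assert_eq!(
        parse_eip2334_path("m/12381/3600/0/0/0").unwrap(),
        vec![12381, 3600, 0, 0, 0]
    );
    assert!(parse_eip2334_path("m/44/60/0").is_err()); // eth1-style path rejected
}
```

Interop matters here because a mnemonic plus a standard path must yield the same validator keys in every tool.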
You were asking about memory usage, Danny, and we've seen some great improvements there. We're looking at 300 megabytes of RAM for a node running a beacon node and a validator client with 2,000 validators, which is quite cool. We've also been working to fix some consensus bugs that were identified by Justin Drake, and we've removed almost all parallelization from our state transition code; it's not really needed anymore since we can do batch BLS verification now. We finished implementing the full gossip verification logic, which caused a bunch of issues with our test harnesses, so we're really looking forward to seeing how it runs on Schlesi. Speaking of Schlesi, Lighthouse now runs on the Schlesi testnet by default; the spec constants and the genesis state are baked into our binary. Other assorted updates: we moved our discv5 implementation into a standalone Sigma Prime repo, and we've been upgrading the entire code base to stable futures, bumping all the Lighthouse dependencies to the latest versions. We're almost done with this massive upgrade that Age Manning has been busy with, and we're hoping Trail of Bits can tackle Lighthouse with those changes incorporated. And we've been working on RPC error handling as well, which has been integrated into our peer reputation system. Cool, thank you. And the 300 megabytes, that's a 16k testnet? Correct. Cool, very good. Thank you. I believe that was all.
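An aside on the batch BLS verification mentioned in the Lighthouse update: the generic construction (described here as the standard trick, not necessarily Lighthouse's exact code) is a random linear combination of the individual verification equations. With public keys $pk_i \in G_1$, signatures $\sigma_i \in G_2$ on messages $m_i$, generator $g_1$, and fresh random scalars $r_i$, the $n$ individual checks $e(pk_i, H(m_i)) = e(g_1, \sigma_i)$ collapse into one:

$$\prod_{i=1}^{n} e\big(pk_i,\; r_i \cdot H(m_i)\big) \;=\; e\Big(g_1,\; \sum_{i=1}^{n} r_i\,\sigma_i\Big)$$

If every signature is valid the equation always holds; a single invalid signature makes it fail except with probability about $2^{-k}$ for $k$-bit scalars. The left side is one multi-pairing and the right side a single pairing, versus $2n$ pairings for separate verification, which is why CPU-level parallelization of the state transition became unnecessary.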
Let's move on to testnets. We can start with Afri.

Yeah, sure, thank you. I can talk a little bit about multi-client testnets. A lot of stuff happened in the three weeks since our last call. I made multiple attempts to create a multi-client testnet that unfortunately failed, mainly due to network fragmentation, but also due to beacon nodes disconnecting and rejecting each other for rate limiting and other reasons. Then we revealed an edge-case problem where the eth1 timestamp can be less than the minimum genesis time, so Prysm and Lighthouse calculated different genesis times. However, our client teams are super responsive, and I want to emphasize that I really appreciate it. Eventually we managed to launch a multi-client testnet two weeks ago; we talked about it, it's called Schlesi, and it launched initially with two Lighthouse validator nodes and two Prysm validator nodes at genesis. In the beginning, finality was horrible because some clients kept crashing, and I had a hard time keeping my validator nodes updated and alive. But then again, as I said, the client teams are super helpful and responsive and are doing an amazing job fixing bugs, and after fixing the most important bugs, Schlesi has now had almost perfect finality and liveness for more than a week. I think everyone is surprised how stably this network is running. After a couple of days, Teku joined the testnet, since they managed to successfully connect to and sync the network first; by now they also run validators on Schlesi, so we have three full clients on Schlesi right now. I know that the Nimbus beacon chain client is also synchronizing. I still personally experience networking and sync issues, but I know the team is very close to fixing them; I didn't manage to get to the chain head yet, but I know it synchronizes and connects. Maybe proto has more detail, because he mentioned earlier today that he managed to do a full sync on Schlesi. I also know that Lodestar managed to connect and synchronize at least some epochs on Schlesi, but I didn't test that myself yet.

So I'm aware of five clients right now with at least partial, and in some cases full, support for the Schlesi testnet. I would be interested in learning more about where Trinity and the Cortex client are regarding interoperability and multi-client testnet efforts. And given the current stability of the Schlesi testnet, I would start working on outlining a coordinated multi-client testnet soon: something based on the mainnet config, ideally targeting version 0.12 of the specification, with 16k genesis validators, and maybe we can figure out a way to launch the testnet with three different clients at genesis. That would be amazing, but that's all up for discussion. I also had the idea that if we do a more coordinated, more official multi-client testnet, maybe we can also do a dry-run test of a deposit contract ceremony on the testnet, but that's all to be discussed. And I would carefully target maybe a June 2020 launch date, though I'm still not very certain how long it will take to implement version 0.12 of the eth2 spec in all clients. I'm open for discussion here, but I think we can start talking about this now. That's it from my side.

Yeah, thank you. Proto, do you have anything to fill in? All right, so I've been trying out and experimenting with Lodestar and Nimbus, the clients that are relatively new to the Schlesi testnet. Lodestar has made some great progress in stability, and they've been syncing many epochs, like a thousand or so; that's about 10% of the Schlesi chain, I think. So stability is getting there. And Nimbus is very close to the head of the chain, about 100 blocks' distance. This is where the sync mode changes, and I think there are some stability issues with this special sync mode for the last few blocks, but it's working, and I've implemented support for them to show up on eth2stats, so everyone can follow the testnet work. Cool, thank you.

Yeah, I'll put a diff up of what is going into v0.12 and outline those items so we can get a better estimate of how long it's going to take. Sorry, someone just messaged me. There are a number of networking changes, but most of those are very minor; I think the big thing is going to be the support for the new BLS. I know we just got it in Herumi, and I'm not sure about the state of the Java implementation and the Milagro implementation. Has anybody looked into those yet? Yeah, we're good, we're pretty much ready; the latest changes were just pushed, so we should be 0.12-compliant in terms of BLS already. Great. And yeah, I'm planning to update the Java implementation this weekend. I spent too much time messing around with Schlesi this week, but it should be landing. Awesome. And I know Python is also updated, and we will have those test vectors output. So I think a coordinated start right at the beginning of June, for some sort of larger coordinated launch, will make a lot of sense. And I think we'll even be able to do some smaller test runs in the weeks before, but that'll be more on the client end, digesting how much work this v0.12 is going to take. Cool. Other conversation on testnets? Questions, comments, thoughts?

Moving into research updates: who wants to get us started? If nobody else volunteers, great, I can go from the Ewasm perspective. Yes, please. Yeah, we have quite a bit to report, because we haven't been reporting frequently on these calls; the last time was, I believe, a few months ago. So a week ago we released Eth1x64, the first variant of it.
There's a write-up on ethresear.ch, but we also have a repo under the ewasm org on GitHub, which has a much longer spec as well as examples written in Solidity; both are linked in the write-up. This first variant uses receipts, which are generated on the sending shard and need to be submitted on the receiving shard. The simple examples we have for this are two kinds of tokens, one of them wrapped tokens, and with that example it is possible to have DAI, for example, on all the different shards.

Now on the next steps, we're looking into new variants, and it's not entirely clear which we're going to do next, but we have two ideas on the plate. One is to look more into yanking, and there have been different, similar proposals regarding yanking. Also, while writing variant 1 (if you look closely at the specification, in one of the appendices there is something called rich transactions), we used that to devise yet another version of yanking, and we only realized retrospectively that it is a kind of yanking. So anyway, I think we're going to look next either into yanking or into something based on the eth transfer objects, which I believe Casey mentioned as part of some phase 2 ideas earlier on.

At this point, I also want to emphasize that one of the main reasons for Eth1x64 was not to move phase 2 onto the EVM, but rather to have a much smaller scale at which to experiment with these kinds of designs, and to engage current dapp developers and give them some kind of understanding of what sharding could look like. That's why we have those Solidity examples, and we really want feedback from dapp developers to guide us toward a design that could be useful to them. Eventually, though, we expect the more useful designs would mean much larger changes to the EVM, where it may not make sense to keep the EVM anymore: historically, the eth1 community has been really reluctant to accept radical EVM changes, and if you make radical EVM changes, you already lose the benefit of access to the EVM tooling, so you may as well switch to a different engine such as Wasm.

Now maybe switching over to some updates on the benchmarking work we've been doing. I guess this is something we always mention, and I think a lot of people assume the Ewasm team just works on benchmarking, but it's more of an on-and-off effort. In the past few months we have been looking at some new engines, and one of them especially, wasm3, performs much better than any of the other interpreters, but it seems to be a bit more complicated than the others, and we have found some edge cases where metering could be challenging on this engine. We have also looked at another Ewasm-compatible Wasm engine called SSVM, but it doesn't really bring any speed benefits over wabt, which has been our main engine so far. And I mentioned fizzy a few months ago, probably in February; it's an interpreter written by the Ewasm team, and I'm happy to report that just today we managed to release 0.1, the first version of it. It passes a lot of the official tests but doesn't implement floating point, and the reason for this release is that next week or the week after we plan to release 0.2, which has optimizations, and we wanted this first release as a baseline against the optimizations.
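To make the receipt flow described above a bit more concrete, here is a purely illustrative sketch, in Rust rather than the Solidity of the actual examples, of the shape of a receipt-based cross-shard transfer: a receipt is minted on the sending shard and can be claimed exactly once on the receiving shard. Every name and field here is invented; the Eth1x64 spec in the ewasm repo defines the real semantics, including the inclusion proof that this sketch elides.

```rust
use std::collections::{HashMap, HashSet};

/// Invented stand-in for a cross-shard transfer receipt.
#[derive(Clone, PartialEq, Eq, Hash)]
struct Receipt {
    from_shard: u8,
    to_shard: u8,
    recipient: [u8; 20],
    amount: u64,
    nonce: u64, // makes each receipt unique, so it can be spent only once
}

/// Minimal receiving-shard view: balances plus the set of spent receipts.
struct ShardState {
    shard_id: u8,
    balances: HashMap<[u8; 20], u64>,
    spent: HashSet<Receipt>,
}

impl ShardState {
    /// Claim a receipt minted on another shard. In the real design the
    /// receipt's inclusion on the sending shard must also be proven; that
    /// verification step is elided here.
    fn claim(&mut self, r: Receipt) -> Result<(), &'static str> {
        if r.to_shard != self.shard_id {
            return Err("receipt addressed to a different shard");
        }
        if self.spent.contains(&r) {
            return Err("already claimed");
        }
        *self.balances.entry(r.recipient).or_insert(0) += r.amount;
        self.spent.insert(r);
        Ok(())
    }
}

fn main() {
    let mut shard1 = ShardState { shard_id: 1, balances: HashMap::new(), spent: HashSet::new() };
    let r = Receipt { from_shard: 0, to_shard: 1, recipient: [0u8; 20], amount: 100, nonce: 7 };
    assert!(shard1.claim(r.clone()).is_ok());
    assert_eq!(shard1.claim(r), Err("already claimed")); // double-claims rejected
}
```

The spent-set (or an equivalent nonce scheme) is what gives the "submitted on the receiving shard" step its exactly-once semantics.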
And sorry for taking so long, but a lot of stuff has happened, and now we come to maybe the really interesting part. As part of the benchmarking, we have been looking at basically all the different precompiles which currently exist in the EVM on eth1, because if we were to propose a Wasm-based system, we wouldn't want to keep the precompiles. We have reported previously that we got quite good results for the elliptic curve precompiles, that was with BN128 (or BN254), including pairings, but it required what we call big-integer host functions. You could see those as being similar to precompiles, but they are much more primitive operations than the precompiles which exist on eth1. As an example, one is 256-bit addition, because Wasm doesn't have it; we also proposed Montgomery multiplication on 256-bit numbers in this big-integer API. With those, we were able to achieve really good speeds on BN128, and in the past month we have been looking into BLS12-381.

Again, we managed to reach speeds very close to native, and all of this is on interpreters. First we looked at a basic BLS12-381 implementation in Rust, which didn't really produce the speeds we expected, or at least hoped for. Some rough numbers: this Rust code compiled natively was taking roughly five milliseconds for a two-point pairing, and the same code compiled to Wasm took roughly 500 milliseconds. So we reached out to Jordi and the team behind websnark, because that has been the optimized Wasm implementation we had been using for BN128, and they implemented support for BLS12. With that code, and with some more optimizations on the big-integer APIs, we were able to bring the time down from 500 milliseconds to close to 14, and I think with one further set of optimizations we got close to eight milliseconds. So that's more than half the speed of native. I would say this is really good news: even for BLS we could avoid needing precompiles in Wasm.

The last part is that we were also interested to check whether we can replicate these findings on the EVM. So in the past three weeks or so we have been working on a small project called EVM384, which we hope to release for tomorrow's All Core Devs call. As a test, we have added three opcodes to the EVM: modular addition, modular subtraction, and Montgomery multiplication, all of them working on 384-bit numbers. We implemented just one building block of the pairing operation and made a more synthetic benchmark out of it to approximate an actual implementation, because obviously in two or three weeks we didn't have the capacity to implement BLS12 on the EVM. But with this synthetic implementation we got pretty close to the Wasm numbers. In the end, what that means is that we may even be able to get rid of the BLS12 precompiles for eth1; potentially it seems possible to replicate them with just three primitives. I think that's all, and sorry it took so long, but I hope you found it interesting.

Yeah, good, thank you. And if you're interested in this stuff and you haven't read the recent Eth1x64 posts, check them out. Any follow-up or questions for axic?
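Since Montgomery multiplication is the key primitive in both the big-integer host functions and EVM384, a one-limb worked example may help. This is generic textbook REDC on 64-bit words (so R = 2^64), purely illustrative, and not the 384-bit code the Ewasm team is benchmarking:

```rust
/// -n^{-1} mod 2^64 for odd n, via Newton iterations on the 2-adic inverse.
/// Starting from inv = n (correct mod 8 for odd n), each step doubles the
/// number of valid low bits, so six iterations cover 64 bits with margin.
fn neg_inv(n: u64) -> u64 {
    debug_assert!(n % 2 == 1);
    let mut inv = n;
    for _ in 0..6 {
        inv = inv.wrapping_mul(2u64.wrapping_sub(n.wrapping_mul(inv)));
    }
    inv.wrapping_neg()
}

/// One-limb Montgomery multiplication (textbook REDC): returns a*b*R^-1 mod n
/// with R = 2^64. Inputs must be < n; n must be odd and < 2^63 so the u128
/// intermediates below cannot overflow.
fn mont_mul(a: u64, b: u64, n: u64, n_neg_inv: u64) -> u64 {
    let t = a as u128 * b as u128;
    let m = (t as u64).wrapping_mul(n_neg_inv); // t * (-n^-1) mod R
    let u = ((t + m as u128 * n as u128) >> 64) as u64; // exact division by R
    if u >= n { u - n } else { u }
}

fn main() {
    let n: u64 = 1_000_000_007; // odd modulus < 2^63
    let np = neg_inv(n);
    let r = ((1u128 << 64) % n as u128) as u64; // R mod n
    let r2 = ((r as u128 * r as u128) % n as u128) as u64; // R^2 mod n

    let (a, b) = (123_456_789u64, 987_654_321u64);
    // Entering Montgomery form is itself a Montgomery multiply by R^2.
    let (am, bm) = (mont_mul(a, r2, n, np), mont_mul(b, r2, n, np));
    // (aR)(bR)R^-1 = abR; multiplying by 1 strips the final R factor.
    let product = mont_mul(mont_mul(am, bm, n, np), 1, n, np);
    assert_eq!(product as u128, a as u128 * b as u128 % n as u128);
    println!("{a} * {b} mod {n} = {product}");
}
```

The attraction of the representation is that the reduction step needs only multiplications, additions, and shifts, never a division by the modulus, which is what makes it cheap to expose as a host function or an opcode.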
Other research updates? I know I have a handful of people that might want to talk. Vitalik?

I've been looking into homomorphic encryption things more. One thing that we discovered is that there's a use case for it in private information retrieval, so we'll probably ask around more about that. I also published an ethresear.ch post this morning, basically an open call to cryptographers to see if they can solve our polynomial commitment problems. Not too much else in terms of researchy things. I guess on the spec side, I'm also looking into some phase 1 simplifications, on the proof of custody side. Oh, right: if you have been following proof of custody stuff, check out Dankrad's new post on ethresear.ch that proposes removing the custody bit from the actual signature, which is pretty cool. And I have a follow-up thing that I haven't posted yet which basically reduces the frequency of key revealing, so you don't have to worry about keeping track of reveal lateness; it's just that if you don't reveal in time, then it's invalid. So between those two things, it seems like we can cut the complexity of the proof of custody by more than half, maybe two thirds or three quarters, which is awesome. The place where it turns out we got a bit unlucky: we went down somewhat of a rabbit hole trying to get a self-verifying proof of custody based on the key commitments, and there are efficiency concerns, and it would also bind us to using key commitments for block verification. So there are challenges in going down that path that don't exist if we just go down this 0.01-bit approach. Yeah, cool. Check out that post; it's pretty interesting, very simple, but it might reduce complexity by a lot.

Okay, other research updates? I'll do one for TXRX. Great. For the eth1-eth2 merge research, Mikhail released an ethresear.ch post about that; I thought it was just after the last call, but actually it was probably about a week ago. He's started working on a draft eth1-eth2 communication protocol, and he's also working on a PoC for phase 1. On the network monitor, we found that Lighthouse was sending unsolicited UDP packets, and we opened a PR for that. Regarding fork choice tests, we've been generating tests; Alex has built a kind of transpiler for them, and we've already found a bug in Teku's proto-array implementation. Around the fork choice tests we also made improvements to Onotole, which is the pyspec transpiler, so it can now translate the phase 1 spec, and we found three bugs, well, we have three PRs, against the phase 1 pyspec based on those results. And we implemented gossipsub 1.1 on jvm-libp2p. Fantastic, those are all awesome things. Thank you.

Other research updates? Gio, any success in making an RPC consensus engine for Geth? Yeah, I mean, there's a PR in the works. We're still discussing the documentation, sorry, the list of RPC calls, with Peter; we had a meeting this morning, and I think we just shared the document with you, so I was going to ask for your input after this call. But overall, yeah, I would say the skeleton is already there; we still need to nail down a couple of details. Great, great. Cool, thank you, that's exciting. Other items, other research items, anything before we move on?
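For readers following the fork choice test work above: the rule those vectors exercise is LMD-GHOST head selection, which implementations such as proto-array optimize heavily. Below is a deliberately naive sketch of the underlying rule in Rust (not proto-array itself, and not any client's code), the kind of reference behavior a generated test vector can be checked against:

```rust
use std::collections::HashMap;

type Root = u64; // stand-in for a 32-byte block root

struct Block { parent: Option<Root> }

/// Total attestation weight in the subtree rooted at `root`.
fn subtree_weight(
    root: Root,
    children: &HashMap<Root, Vec<Root>>,
    votes: &HashMap<Root, u64>,
) -> u64 {
    votes.get(&root).copied().unwrap_or(0)
        + children.get(&root).map_or(0, |cs| {
            cs.iter().map(|&c| subtree_weight(c, children, votes)).sum()
        })
}

/// Naive LMD-GHOST: from the justified root, repeatedly step into the child
/// with the heaviest subtree. Real clients cache weights (proto-array)
/// instead of recomputing them on every call, and the spec breaks ties
/// deterministically on the root value; ties here just take the last maximum.
fn get_head(
    blocks: &HashMap<Root, Block>,
    votes: &HashMap<Root, u64>, // latest-vote weight attributed to each block
    justified: Root,
) -> Root {
    let mut children: HashMap<Root, Vec<Root>> = HashMap::new();
    for (&root, block) in blocks {
        if let Some(p) = block.parent {
            children.entry(p).or_default().push(root);
        }
    }
    let mut head = justified;
    while let Some(cs) = children.get(&head) {
        head = *cs
            .iter()
            .max_by_key(|&&c| subtree_weight(c, &children, votes))
            .expect("non-empty by construction");
    }
    head
}

fn main() {
    // Genesis 0 with two competing children; block 2's subtree is heavier.
    let blocks = HashMap::from([
        (0, Block { parent: None }),
        (1, Block { parent: Some(0) }),
        (2, Block { parent: Some(0) }),
    ]);
    let votes = HashMap::from([(1, 5u64), (2, 7u64)]);
    assert_eq!(get_head(&blocks, &votes, 0), 2);
}
```

A transpiled test suite can pin down exactly the corner cases where naive and optimized implementations are most likely to diverge, such as tie-breaking and weight updates, which is presumably how the Teku proto-array bug surfaced.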
Cool, next up is networking. We did have a call about eight days ago; it was a really good conversation, and action items came out of it. And, this has kind of been ongoing, we've been debugging and interfacing on networking. But are there any other updates? Anything on your mind, Felix, or otherwise?

Hey, sorry, I couldn't unmute in time, and sorry for not participating in the networking call last time. So on my side, I'm still working on the spec updates to the discv5 spec, which will improve the current performance a little bit and also resolve this one error message; I guess if you've been running it on a testnet, you've been seeing it. Basically, sometimes you can get packets which have seemingly wrong encoding, but actually it's just a spec bug. So I'm still working on that and will publish it very soon; basically, I'll have something for feedback in the next couple of days. And then it would be kind of nice to get into a bit of a conversation with all of the implementation teams to figure out what's going to be the path of least resistance for upgrading, because a discovery upgrade can be kind of complicated. We have to figure out whether we actually want to try to do some soft update, or just basically live with half-broken discovery for a couple of weeks until everyone has the right version.

So it's complicated with respect to live nets? Yeah, if the network is live, it can be complicated, because if there's a mismatch in the versions and the versions are fundamentally incompatible, then there's no way to do a clean upgrade; it basically just means nodes won't find each other. So with the v0.12 spec update and the BLS updates coming, it might just be best to wrap this into the same update, because we're going to have to restart the nets anyway. Okay, that sounds very good. I mean, you were talking about a tentative June timeline for the next testnet, so maybe we can just make it so the updates go in then, and we launch a new testnet with the new version; something like that would be great. How long do you expect these changes to need, to go back and forth and make it into the spec? I guess it's going to take approximately one week or more, at least, to get the actual spec done, and then I can assist people with the implementation. There's going to be one bigger change to the packet format, but otherwise it's basically going to be all minor changes. Cool. So what I would say is: reach out early to implementers to get feedback and input so we can streamline this. If we can get it done on that timeline, then I think we can avoid the headache of upgrading these nets live. Yeah, okay. Cool, thank you. Any questions for Felix? Great. Other networking items?

Okay, general spec discussion. I have seen those phase 1 PRs with the bugs. The testing on phase 1 is currently minimal, so I really appreciate those bug reports; I've been prioritizing v0.12, but I'll get to them soon. Other spec items? I imagine this agenda item has been pretty quiet for the past six months, but as we move into phase 1 implementations, I'm sure it'll get a little more lively.

I have a question regarding the fork choice tests: will we have them in 0.12, or in 0.12.1? Right. So, Joseph, what is the state of the fork choice tests that you all have been working on? Are these generated off of the pyspec, or off of the old Harmony implementation? And what's the format they're output in?
And given that format, is it something that we can integrate into the canonical vectors for the next release?

Alex is the one on the call who can likely speak to it best, but essentially what it does is read the pyspec and then transpile it to generate the tests. Alex, are you on? I don't think he is. I'll knock on y'all's door and see if these tests can just be dropped into the next release, because if they're already working for you, they can work for others. Definitely. I think that was his goal, though: generally being able to automate test generation for all the different clients. Cool. Okay, I'm gonna knock on his door. Thanks, Mawin. Thanks, Joseph.

Okay, any other items here? Hi. So about BLS: shortly before this call, I made a PR to fix the phase 1 zero-signature issue I posted about, and I hope that people who are interested can take a look within 24 hours or so, so we can generate the BLS tests soon. Thank you. Yeah, I will certainly take a look, and others can as well. Thank you.

Okay, open discussion, anything on anyone's mind? Awesome work, everyone; there are a ton of moving parts right now and it's super, super exciting. Thank you everyone, and I will talk to you all soon. Thank you. Thanks, y'all. Bye. Thanks everyone, bye. Thank you.