Yeah, I was testing a different audio setup and then my Zoom kept crashing, so I'll just have to go with this today. But thank you. Cool. Welcome to call 61. We're going to talk about some metrics naming that might be useful, and research updates. As you're probably aware, there are a number of other things going on: specifically, there was a rayonism call yesterday, and there's also a fortnightly merge-specific call. So although we can certainly answer a couple of questions here if you want to dig into any of that, I think it'd be best to use the appropriate channels and the appropriate calls to dig deep there. The focus here will likely be on Altair. On the client updates, I'd love to hear just general progress on Altair: specifically, just the feel on where we're at on engineering. Are there any huge blockers, any skeletons in the closet, as we've been modularizing components and such? That would help us understand where things stand. So, that said, let's start with client updates, and we can go with Prysm.

Hey guys, Terence here from Prysmatic Labs. Let's chat about Altair first. On the Altair front it's a little bit slower, but we're making progress. The latest progress is that we implemented the Altair beacon state, with the replacement and the additional fields, and we're doing some testing around the state to make sure that's implemented correctly. I'm almost done with the sync committee processing, and next on my to-do list is to work on the accounting reform. So that's us for Altair. We're pretty confident that once we have the foundation for the hard fork logic done, the rest can move pretty quickly. In terms of the merge, we released a demo for Prysm and Catalyst, so I hope that people have tried that. On the current work front, we did some optimization of the slasher DB schema for more efficient storage.
We started working on weak subjectivity sync, so people can pass a state, or a URL plus block root, to the CLI and begin syncing from there. And we also fixed a few bugs on the peer store front. So yeah, that's it. Thank you.

Thanks. It is I this week. Yeah, so the last little while has been pretty much all about Altair, and progress is good. We've updated to the alpha.3 reference tests and almost everything is passing. As I understand it, there are a couple of sanity checks which are failing, so there's some insanity going on somewhere, and we are failing to decode the SSZ in one of the fork choice tests, which is weird because we pass all the other SSZ tests. I just need to investigate what's going on there, but otherwise I think we're in pretty good shape on Altair. Other than that, we've been making optimizations for the workload on Prater: mostly improvements to peer scoring, updating to the latest BLST library, and doing a bit more batch verification of signatures in attestations. And that's pretty much it, except to say, like everybody else, we're hiring. Thank you.

I have a request: when y'all do figure out the SSZ decoding issue, if that is in fact some sort of corner case that we're not covering in the SSZ tests, let us know so we can get a test specifically for that in the right place, rather than it being implicit in the fork choice tests. Yeah, for sure.

Okay, Lodestar. Hey everybody. Real quick: we cut a new release, 0.19, on Tuesday. It's pretty chunky. We're now supporting the latest LTS, Node 14. We've added queuing to our gossip validation engine, and we also have threaded BLS verification now, so all these things combined make for a much more stable node. As far as Altair goes, we have done a lot of the preliminary work, in that we have support for the different data structures and we're updating the database.
We also have a naive implementation of the alpha.3 state transition, and it's passing spec tests. The upcoming things are making something that matches our fast state transition implementation, and also implementing the various network updates, which we have not yet tackled.

Did you all integrate the fork choice tests yet? Oh, we have not; we'll add that to our list. And other clients, as we go, you can let me know if you've integrated those. Great, thank you. Let's move on to Lighthouse.

We're also passing the alpha.3 consensus tests. Danny, what was that? I missed what you said just then. There are fork choice tests now in the test vectors, and I'm just curious, especially because you've run into some decoding issues, whether anyone has integrated those yet. No, we haven't done the fork choice ones, just the consensus ones so far. Sorry about that. We're also working on the Altair networking, with Pawan and Age. We have our doppelganger implementation in for review; that's the protection against running multiple validator clients. We've been doing a lot of work with the memory allocator: we've got something like three to six times memory savings. That's in review, and we're still squeezing a bit more out of it. We're working on UI development, so we're on designs and mockups now, and coding should be starting at the end of the month. And we're also preparing for the merge testnet and rayonism. That's it for us.

And Nimbus. Hi. Regarding the Altair hard fork: for now we have the low-impact and preparatory changes merged, and we're still evaluating the modularization of the codebase, for example the beacon state. The main thing that we need to solve is that currently we assume a kind of one-fork-at-a-time design, but when we replay old states around the transition we need to be able to handle it properly. Otherwise, we improved the performance of Nimbus: we had some bottlenecks related to pruning.
And we also added attestation batching, and this improved performance on a Raspberry Pi, to be able to handle the increased load on Pyrmont and hopefully on Prater as well. Regarding Prater, we also added a Prater page to our Nimbus book. And we merged a long-standing feature request: eth1 provider fallback, so that you can point to Geth and also Infura in case your Geth instance fails, so you don't have any issue producing blocks. Otherwise, we also finished our HTTP server work, because in Nim we didn't have any secure HTTP server that worked with our stack. This means that in the future you won't need to add the insecure option to have metrics, and it also means that the REST API is almost finished; it's just pending review. And you won't need to use JSON-RPC, which was used as a stopgap. Lastly, on the DevOps front, we will be migrating our fleet away from AWS to save on costs. For now we are migrating only one node, but it's possible that in the coming weeks we migrate some more, with some downtime in between.

I have a question for those that have done the doppelganger protection, especially after the Prater launch with Nimbus. Did you all decide: is there a workaround for genesis that you have integrated, or is that just still a case where people are offline for a couple of epochs? Given that it happens once every six months, we didn't work on the workaround yet. And Paul, did y'all do anything with that? I think our plan is to just not enable it at genesis. Right, like a flag. Yeah, that's right. Okay, thank you.

I know everyone's working on different ways to modularize codebases to handle these fork data structures and fork logic. Any particular blockers or issues people have run into that they want advice or information sharing on here? Ask away. Okay, cool.

Moving on to Altair: as you all saw, alpha.3 was launched.
I actually realized I did something embarrassing, though for a prototype it's not too embarrassing: I started at alpha one instead of alpha zero, and actually confused myself. I was releasing alpha three and was like, wait, we haven't done four of these. I had an off-by-one error in my head. But that is out, and it is really close to homing in on a final version. I think there are a couple of cleanups and some additional testing being done, but nothing substantive; that is in the dev branch currently. The plan would be to get thumbs up from engineering teams that implementations are done, and also obviously any feedback that you might have, so we can home in on finalizing that. We are at the beginning of April; we had discussed doing this release to mainnet in June. That's beginning to be, maybe not aggressive, but the optimistic timeline. It's definitely: shoot for June or July. But I have a feeling, based off where people are at, that we're not quite ready to talk about timelines. Does anybody have strong feelings about earliest timelines currently that they want to share?

We need, like, two months' lead time for audits, so that's a hard constraint. If you are looking to do audits I would schedule them right now, because my understanding of the current auditing industry is that people are incredibly busy, and getting timelines even within the next three months might not be realistic. Yeah, I can second that.

Any other comments on Altair planning? I figure in the next couple of weeks we'll have much better visibility on this, so let's keep digging in and communicating quite a bit as things are ironed out, to begin to set some target dates.

So Leo and Pari have been discussing some standardization of some core Prometheus metrics that might help in various ways. Pari, Leo, do you want to talk about that? Yes.
So yes, the idea is to try to standardize some of the metrics, and the plan is to start with just a few of them, say about a dozen. We have prepared a document, I just shared the link in the chat, in which we have two sets. One of them we call the minimal set; it's about a dozen metrics that we think are interesting to look at, in particular in the context of the Prater testnet. For each we provide the metric, the description, and the reasoning. We looked at four of the five clients, and I think at least for this first batch of metrics, all of them have already implemented them. The issue is that they just have different names, and we are not 100% sure that each one really measures the same thing. So the idea is to start with just these few and standardize them in a way that lets us monitor them, make dashboards, and really see what's going on. It would be really great if the different client teams could each select one person to help us with this process. We promise to take as little of your time as possible; we know that you're very busy. If we could set up a call in which we discuss, for example, which of these metrics are the most relevant, and whether the metrics that we've got here are correct or not, that would be great. So yeah, we will try to set up a meeting within the next few weeks to discuss these metrics, and it would be great if the client teams can select one person to join this meeting and let us know who that person is. Pari, did you want to add anything else? No, that's it. Thank you.

Right. Any questions? What I want to reiterate is that for client fluidity, I think we did a good job with the validator interchange format, but one thing that I think is locking people in even more is their monitoring setups.
It's a lot of work to get things set up and monitored properly, and then people don't want to do that again. So this could potentially enable some better fluidity there. Leo and Pari, what were the particular use cases driving this effort on your end? Is it primarily monitoring? I mean, obviously it's monitoring, but what are we all attempting to do here?

Yeah, so that's what we discussed with Pari: several use cases, in the context of Prater or not. I think Pari has very clear use cases. Do you want to mention those, Pari? Yeah, sure. It pretty much started with the Prater setup. I wanted to create a bunch of dashboards to monitor how the testnet was going, but quickly ran into the problem of having to look at three or four different applications to figure out the exact metric names. So I looked at what pain points were there and what metrics were relevant for me to know that stuff is working perfectly fine. And I spoke with Leo and discussed a few of them; he was on the same page, and then we listed them down. I think the eth2.0-APIs repo has a new release, so I assume that it can be targeted as a standard for some common monitoring across clients. I'm adding it in the chat.

The APIs are different from these monitoring metrics, right? They're fairly independent. There actually exists a metrics document which tried to standardize this, back maybe a year ago. Maybe I'm growing old. Sorry.

I think that's it. On the Lighthouse side we've also been quite interested in standardizing metrics to enable client fluidity. One of the approaches we've looked at taking: there are a bunch of community members who have made some pretty cool dashboards. I forget their names now, but there are some quite popular ones. Typically they work with Prysm.
So the approach we've been taking is to get these dashboards, use that set of metrics as an indication of what people actually want, and then try to focus on those first. That's the route we've been taking. We had someone on that full time, but they got distracted; I think they'll get back onto it full time, so Lighthouse is keen to standardize there.

Would it be possible for you to put us in touch with this person, so we could at least get information about the dashboard and then correlate that with what we already have in our minimal metric set? Yeah. Something I found with metrics is that it varies a lot depending on what you want to do. If you're sitting there with, like, your two mainnet validators, there's a certain set of things you want to do; if you're trying to monitor the health of the chain, there are other things you want to do; and if you're someone like me, there's a whole other set of things you want to do. That's just one thing that I found. But the Lighthouse Discord has a UX channel where people share some of those; I think seamonkey82 is someone that's made a good one. I can also see that people have tried to convert dashboards over to Lighthouse. Perhaps reaching out on the Prysm Discord and seeing what people are using is a good idea.

And we noticed that actually most of the clients already have a number of metrics in common, so I think the best way to start is to find this common intersection between all the clients, or at least most of them, and start from that point. In the document we have a small table where you can add a contact person for each client. So if you can, go ahead and fill it in; then we can contact this person and try to set up a common call.
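For illustration, the cross-client name mapping being described could be sketched roughly like this. All metric and client names below are hypothetical placeholders, not the actual names any client exposes; this is just a sketch of the translation idea, not a real tool.

```python
# Sketch: normalizing client-specific Prometheus metric names to a
# hypothetical standardized set. Every name here is illustrative only.

# Hypothetical mapping: (client, client-specific name) -> standardized name.
METRIC_ALIASES = {
    ("clientA", "beacon_head_slot"): "eth2_head_slot",
    ("clientB", "chain_head_slot"): "eth2_head_slot",
    ("clientA", "connected_peers"): "eth2_peer_count",
    ("clientB", "p2p_peer_count"): "eth2_peer_count",
}

def normalize(client: str, name: str) -> str:
    """Return the standardized metric name, or the original if unmapped."""
    return METRIC_ALIASES.get((client, name), name)

# With a mapping like this, one dashboard query can target the
# standardized name regardless of which client produced the sample.
samples = [("clientA", "beacon_head_slot", 123), ("clientB", "chain_head_slot", 123)]
normalized = {normalize(c, n) for c, n, _ in samples}
```

Once names are standardized at the source, this translation layer disappears and dashboards become portable across clients, which is the fluidity point raised above.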
And then we can all discuss which metrics make sense to most of the clients. Yeah, we have probably a thousand or something, so there are quite a few, but we can help out. Any other comments or questions on this? Okay. Thank you, Pari and Leo.

Are there any research updates people would like to share today? Yep, I have one. Okay, I'll go. I'm happy to share that the new fork choice spec is ready. It's been in the works for a few months, so I'm pretty excited to share it: it's PR 2292 on the specs repo. Please do check it out and leave comments and feedback, and if there's something specific you want to talk about you can ping me or Danny. The key changes in this PR are that the block tree structure is changed, and the way that latest messages are counted during fork choice execution has changed. The happy news is that there's no change to the network structures, so the way latest messages are structured remains the same. Overall, this has good security and good performance. We had a few setbacks in the research that were the cause of the delay, but it seems like this is a really good fix that we arrived at.

Does the new fork choice imply that we need to have two fork choice implementations, or can we use it to replace the old fork choice from genesis? I think it should work: basically, as long as that specific attack has not happened, both fork choices are going to give the same result. And as far as we know, nothing like that has happened since genesis, so that should be good. I mean, we want finality, right; only if we had a non-finalized chain would that matter anyway, since both are rooted in finality. You could imagine it causing a minor reorg in that stretch after finality if you switched, but that's fine. Okay, but once you have switched and you have finality, there's no need to remember the old fork choice rule. Yeah, exactly.
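For context, the "latest messages" in this exchange are the per-validator head votes that LMD-GHOST-style fork choice counts when picking a head. Below is a heavily simplified toy sketch of that counting, assuming a plain block tree with one vote per validator; it is not the spec's actual data structures and not the PR 2292 changes themselves.

```python
# Toy LMD-GHOST-style head selection: each validator's latest message is
# a vote for one block; the head is found by walking from the root
# toward the child whose subtree carries the most votes.

from collections import defaultdict

def get_head(root, children, latest_messages):
    """children: block -> list of child blocks.
    latest_messages: validator index -> block voted for."""
    # Propagate each vote up to the root so every ancestor's subtree
    # weight includes it.
    weight = defaultdict(int)
    parent = {c: p for p, cs in children.items() for c in cs}
    for block in latest_messages.values():
        while True:
            weight[block] += 1
            if block == root:
                break
            block = parent[block]
    # Walk down, always taking the heaviest child (ties broken by name).
    head = root
    while children.get(head):
        head = max(children[head], key=lambda b: (weight[b], b))
    return head

# Example tree: root -> {a, b}, a -> {c}; two validators favor c's branch.
children = {"root": ["a", "b"], "a": ["c"]}
votes = {0: "c", 1: "c", 2: "b"}
```

In this toy model, swapping in a different counting rule changes only how `weight` is accumulated, which is roughly why, absent the specific attack being discussed, old and new rules can agree on the same head.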
It's written as a change to the phase zero, kind of the base, fork choice. I guess we can make it so that it only gets activated on finality or something like that, and then you at least should never have to use both fork choice rules at once. But even then, in 99.99% of cases, if you just switch fork choices, it's going to give you the same exact answer as the old one. Right, right. Yeah, I guess, true: you can just make it a rule that everyone implements the new fork choice at a certain time of day, and then that should also work. It would probably be fine if some of the nodes have upgraded to the new one and some of them are operating on the old one, but obviously that might get tricky. Yeah, if we said that we were going to change at some point in the day, then that would mean that at some point the clients need to have both of them, right? That's exactly where I thought Mamy's question was going. I think, yeah, to make the transition there would have to be those two implementations living in the client at the same time, at least until we are done with the transition. It's not even worth that complexity, because if there were this type of attack you might have a disagreement for some short amount of time, but you'd still get justification and finality. Right. You're not removing things from your fork choice, you're just adding, so things can still move on. So I guess we can probably easily show that there's no safety issue with having both at the same time, right? And then, yeah, Mamy is right: we can just make it an upgrade. Yeah, I think we can just get all the clients to release in the same week or something; that sounds like it would be reasonable. That's what I would argue. Yes. We can think about it a little bit more, but my understanding is that it's safe to roll it out like that.
And something that we are looking for specifically, as you look at it, is a sanity check that the engineering complexity of this change is not massive: that you can generally use some of your same structures and algorithms in a slightly modified way, which we believe is the case. Anything else on fork choice before we move on? Just to be clear, it's not for Altair, right? Right, we are not currently planning on releasing it with Altair. Again, it is a modification of the phase zero rule, and once we do have it, and we're considering it for the merge, we can have that conversation; we can pick it up, maybe offline, on how we want to coordinate it.

Okay, other research updates? I was just going to give some updates on the merge. First of all, we are changing the terminology a bit: we now say execution layer instead of application layer, and execution engine, execution payload, execution block, and so forth. There is a corresponding PR in the specs repository. Once this PR is merged, I'm going to make a couple more cleanups in a separate PR, and then it will make sense to start working on making this spec executable, which has already been started by Hsiao-Wei. Thanks a lot for that. There is also a spec for rayonism, which is focused on the former proof-of-work clients and how to turn them into the execution engine. It's almost complete; I need to do some fixes in that as well. So that's probably it for the merge. Also, we have a merge implementers' call next week; if you want to discuss some particular technical detail regarding the merge, just reach out to me and I'll add it to the agenda. That's all for me.

Great. And can somebody drop that rayonism spec into the chat? I think it's a really good document, especially to get people up to speed on the execution side. Yeah, I'll drop it. Thanks. Okay, anything else on the research side?
Especially on Altair: anything that's coming up that people want to discuss or have questions about? And any items in general people would like to bring up on this call?

So besides the merge call next week, we'll also do more regular calls for rayonism. If you're interested in staying in the loop, you can attend these kind of office-hour calls. They're optional; you can hop in and out, on the R&D Discord. And yeah, you're welcome to help with the early merge work. Great. Yeah, less formal, more just catching up and asking questions.

Okay, anything else people have to share or discuss today? Awesome. Again, sorry about all the technical difficulties earlier; we'll get this up on YouTube soon. Thanks for bearing with me. Talk to you all very soon. Bye everybody. Thank you. Bye.