Today I'm going to talk about what I've been working on for the past four months or so. This project actually started well before the EPF: I've been doing Ethereum research in a different branch for nearly a year and a half, and out of that first phase of research came an idea for further implementation work. So today I'm going to talk about a new feature that allows a validator to update their signing key. I'll explain why that's important, walk through the security questions that are still unanswered, and discuss what it means for the security of the Ethereum blockchain generally.

The goal of the project was to understand the feasibility of this feature, how easy it is to build, and whether deploying it in the near future has any wider implications. There are so many moving parts, and the keys are used in so many places, that it's crucial to verify that changing one area doesn't break another. The second objective was a simple proof of concept; for that I used the pyspec, written in Python, to do a very quick implementation, both so I could understand what's happening and to test whether it's achievable.

The purpose is to remove any opportunity for an attacker to leverage the slashing mechanism as a way to extort money from a victim. In our threat model we assume the attacker can prove they have actually compromised the key: if I just approached you and said 'hey, give me $200, I've somehow compromised your signing key,' you wouldn't normally believe it. So the model relies on proof of ownership of the compromised signing key, and from that point the key is assumed compromised. That's when the extortion game really begins, the negotiation between the attacker and the victim, and once ownership is proven there are only three things that can happen. You might think the attacker could simply sign a voluntary exit for the target, but there's no endgame for them in doing that. Sorry, let me frame this from the victim's perspective: from the victim's perspective there are three possible outcomes. First, as soon as you realize the key is compromised, you just exit. Second, you refuse and get slashed, and once you're slashed you're in the exit process for roughly 36 days; within those 36 days you still accrue penalties, because if I hold your key and keep signing, you keep getting slashed, so you keep losing money. Third, you pay the ransom. The last one is tricky, as I'll discuss a bit later, because there's nothing stopping the attacker from coming back to demand more money. That's a big problem, and there needs to be a path where, once the victim has paid, they can successfully exit without further penalties. That's crucial in the model.
The key benefit of a key update feature is that a compromised validator does not need to exit the beacon chain. From a business perspective (and validating is a business), the longer you sit in an exit queue not performing your duties, the worse off you are; this feature lets validators simply change their key without exiting. It also increases validator availability: if a validator's key is compromised they don't need to leave, which improves the integrity, availability, and overall security of the Ethereum blockchain, because validators keep doing their job rather than exiting over compromised keys. And lastly, any period of inactivity is an opportunity loss, so the feature avoids losing potential rewards while the validator makes the change. That's a high-level summary of the most important benefits.

How does it work? Once a validator has been compromised, there are two conditions for the update. The update needs to be signed with something unique: you can't use the current signing key. For voluntary exits you can sign with the validator's existing key, but here we assume the attacker already has access to it. So we need something that has not previously been used, and the only candidate is the validator's withdrawal key, assuming it has been kept safe offline and hasn't been accessed. Those are the conditions for the scheme to work: something unique that has not been compromised.

As I mentioned, the goal was to get inside the development process, understand the consensus spec, and do a simple implementation of the feature. It's a very basic proof of concept, and there are plenty of other security concerns that still need to be addressed, which I'll get to in later slides. I had to modify the current pyspec and add new functions; the slide shows a few of the important ones that allow the update to happen, and there are other definitions added in various places to facilitate the change. I've got a link here where I've been pushing updates. It needs to be tidied up a bit because it's a little messy, but hopefully in the next week or so there will be a nice working feature with a test case as well. The test case simply checks that the key has been changed: it compares the old key with the new key and asserts they're not the same. It's a very simple test, but the point was to try something quickly, fail, and learn from it.
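To make the mechanism concrete, here is a minimal sketch in the style of the pyspec. All the names (SigningKeyUpdate, process_signing_key_update, the toy signature helpers) are my own illustrations of the idea rather than the functions from the actual branch: the operation carries the new public key, is authorized by the withdrawal key, and the state transition swaps the validator's signing key after checking uniqueness and the signature.

    import hashlib
    from dataclasses import dataclass

    def toy_sign(secret: bytes, message: bytes) -> bytes:
        # Stand-in for a BLS signature so the sketch runs end to end.
        return hashlib.sha256(secret + message).digest()

    def toy_verify(secret: bytes, message: bytes, signature: bytes) -> bool:
        return signature == toy_sign(secret, message)

    @dataclass
    class Validator:
        pubkey: bytes           # the hot signing key, assumed compromised
        withdrawal_key: bytes   # kept offline; stands in for withdrawal credentials

    @dataclass
    class SigningKeyUpdate:     # hypothetical beacon chain operation
        validator_index: int
        new_pubkey: bytes
        signature: bytes        # produced with the withdrawal key, not the hot key

    def process_signing_key_update(validators, update):
        validator = validators[update.validator_index]
        # The new key must be unique: it cannot already belong to any validator.
        assert all(v.pubkey != update.new_pubkey for v in validators)
        # Authorization comes from the withdrawal key, which the attacker is
        # assumed NOT to hold, unlike the compromised signing key.
        assert toy_verify(validator.withdrawal_key, update.new_pubkey, update.signature)
        validator.pubkey = update.new_pubkey

    vals = [Validator(pubkey=b"hot-key-0", withdrawal_key=b"cold-key-0")]
    update = SigningKeyUpdate(0, b"hot-key-1", toy_sign(b"cold-key-0", b"hot-key-1"))
    process_signing_key_update(vals, update)
    assert vals[0].pubkey == b"hot-key-1"   # the PoC's test: the key actually changed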
On to the challenges; there are a lot more than what I've put here. One is the ability to handle smart-contract withdrawals. In the primary case we've got the two BLS keys, the signing key and the withdrawal key, but withdrawal credentials can also be embedded in a smart contract, so instead of an individual validator's key it's a smart contract that controls the withdrawal. In that case, how do we make sure the new signing key is authorized by the original withdrawal credentials? That's an interesting challenge we need to explore in more detail. The even bigger challenge is that the public key is used in a lot of different places, so we need to make sure that re-enrolling the validator under a new key doesn't break other parts of the ecosystem.

To summarize some of the unanswered questions (and this is only some of them; every discussion I have adds to the list, which is great, because it means there's plenty to think about, and in the final report I hope to answer some of these). First, what happens if the key is changed in a particular epoch and, in that same epoch, an exit is initiated as well? What are the implications? Next, penalties: say I update my key in one epoch, and in a later epoch it turns out the old key did something slashable. There needs to be a history that says 'the key was changed, but your old key was found to have committed an offence, so you will still be charged that penalty.' That's going to be a real challenge. Then there are the security implications for staking pools. Having spoken with Manurelia, I'm fairly comfortable with one part of it: non-custodial staking pools set up their own validators, so you create a smart contract, send them the ETH, and they manage the keys for you, and in that sense it's actually fine. The issue is delegated setups: what happens if the client changes the key out from under the operator who has to handle withdrawals? So I need to carry out further tests there. There's also the validator cache on the client side, which I have to look at in more detail, and the implications for the sync committee. That last one was only raised to me last week and I hadn't even thought about it. It's a very interesting problem, because sync committee duties run about 27 hours. So what do we do? Do we simply forbid a validator in a sync committee from changing its key for those 27 hours? Or what other solution can we employ? These are security questions that need answers before this feature comes anywhere close to being deployed. It's very exciting work.

That concludes my presentation. I'm happy to take questions, and if you can think of other issues I should consider, please let me know; I'm keen to keep the discussion going. Thank you for your time.

[Audience] Very nice presentation. One thing that might make your life easier: for withdrawal credentials we're moving to something more like execution-layer addresses, so it becomes similar to ordinary withdrawals and you wouldn't need the smart-contract case anymore. That might make your life a little bit easier.
[Speaker] Yeah, on withdrawals: as someone said to me in chat earlier, if the withdrawal window could be reduced to a couple of hours, you wouldn't really need this feature; you could just exit and re-set up the validator. But withdrawals can also take longer, and the longer your key is compromised, the longer this remains an issue. So until the window is narrowed down to a couple of hours, this is a problem, because it's a queue with a potentially prolonged wait, and for any business running a validator that matters. Any other comments or questions online?

Yeah, exactly. Even once withdrawals are implemented with the Shanghai upgrade, for stability there has to be a queue system, and the queue could mean a long wait before you get your funds. That's the window in which the attacker plays the extortion game against you. If the window is shortened to a couple of hours, the attack becomes very difficult: you might get slashed once, but that's essentially it, because at that point it's mostly just an exit queue. The remaining problem is that while the exit is being executed you can still be penalized; that's the last issue here. Thank you so much.

[Next speaker] All right, everybody. I'm Echo, and I'm here to talk about data availability sampling and building simulations in local environments to explore possible solutions for passing around the new kind of data that will be needed when danksharding arrives. So, what is data availability sampling? It's the process by which one participant can convince themselves that data has been published on chain, by asking other participants for pieces of it, without having to download all of the data to know that all of it is available. And why do we need it? We need it for blob data. Blobs are how layer-2 scaling solutions prove to individuals that they made some off-chain computation and aren't lying: with fraud proofs, blob data lets people verify that nobody is cheating, and with ZK-rollups, zero-knowledge proofs, it lets them reconstruct the information they need.

Okay, so how do we make this happen? That is the question; that is the DAS networking problem. We need peer-to-peer networks that are fast, privacy-preserving, and Sybil-resistant. I basically stumbled upon the DAS networking problem, heard from Dankrad about a possible solution, and just picked it up and ran with it. The possible solution is to create a secure DHT overlay on top of the discv5 network. So what the heck does that mean? It took me a while to figure out. A secure DHT overlay network is essentially two sub-protocols built on top of plain discv5, using discv5's talk request/response functionality. In the best case you have a sub-network, the main DAS network, that passes the information around.
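Stepping back to the sampling idea itself, here's a toy illustration (my numbers, not Echo's) of why a handful of random samples is enough. With 2x erasure coding, an adversary has to withhold more than half of the extended data to make a blob unrecoverable, so each independent sample a client draws has at most roughly a 1/2 chance of being answered, and the chance of being fooled falls exponentially with the number of samples.

    import random

    # Toy model: an erasure-coded blob of N chunks; the attacker withholds just
    # over half, so the data cannot be reconstructed. A client draws k random
    # sample indices and accepts only if every one of them is answered.
    N = 512
    withheld = set(random.sample(range(N), N // 2 + 1))

    def client_accepts(k: int) -> bool:
        samples = random.sample(range(N), k)
        return all(i not in withheld for i in samples)

    for k in (5, 10, 20, 30):
        trials = 10_000
        fooled = sum(client_accepts(k) for _ in range(trials))
        # P(fooled) is roughly (1/2)**k, so 30 samples is already ~1e-9.
        print(f"k={k:2d}: attacker fools the client in {fooled}/{trials} trials")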
Any individual can join it and pass samples and other needed information around. And if that network comes under attack (distributed hash tables are really easy to attack), there's a secondary network that only validators are allowed to participate in, and the system falls back on the validators passing information around to keep data availability sampling going. The cool thing about the secure overlay network is that validators are incentivized to keep things running smoothly, or at least far more incentivized than a random anonymous participant. A few things to note: the overlay networks and plain discv5 each have their own routing tables. The secure and open overlay protocols share similar machinery, but each of them is its own DHT.

What I've been doing over the past few months is building a simulation to see how these overlay networks dance with each other. I originally tried to create the secondary overlay routing table inside Eric and Timothy's repository, a pretty complex Rust simulation that tinkers with data availability sampling, mostly around querying samples. I tried to implement the secure overlay network there and realized I was new to networking in general and new to Rust, and needed to chill out and take an intermediate step, so I built a DAS playground. It's a simple simulation that instantiates these protocol structs between nodes. I've got the discovery protocol and both overlay sub-network protocols instantiated, with message processing and simple communication between local peers. The next step is to put my big-boy pants on and make the secondary overlay network happen in the forked version of Eric and Timothy's simulation, the idea being to measure how the two networks perform as they communicate. That's it for now.

I want to thank Mario and Josh, first of all, for letting this happen and making it happen for me. I want to thank Clara, who was my first friend in crypto: back when I was trying to build a light client bridge in Python, she was the first person who said 'hey, I can help you out.' Kamen came along and became my first in-real-life friend through Ethereum. Eric was a bridge between Kamen and Timothy, and he's been very kind. And Timothy has been leading this DAS prototype simulation and was huge in answering all sorts of questions, with my dumb questions turning into better questions as time went on. A lot of patience and love; I appreciate them all. Thanks for everything, guys. Any questions?

[Audience] Any plans to apply this on top of other overlay networks for blockchain work? I think that would be pretty cool. [Echo] Ah, I hadn't thought about that. Yeah, thank you, we'll talk. Anyone else? Thanks, everybody, and thank you for making this overlay protocol happen. Thank you so much again.
[Next speaker] Yeah, thanks everyone for showing up. My name is Ekno, or Daniel, and my project was about reducing the trust assumptions in PBS relaying. That's my agenda, so first off: what even is PBS? It's proposer separation, sorry, proposer-builder separation. You take the validator and split it into two entities, the builder and the proposer. Why? Partly it's a market force: proposers have an incentive to outsource their block construction because of MEV. Builders are often highly sophisticated, with proprietary algorithms to extract MEV, and they compete against one another to get included in the proposer's slot. It's also needed for danksharding, because with danksharding we have these roughly 16-megabyte blocks, so the builder has to meet high bandwidth requirements, while the rest of the network only needs to verify that the data is available, via the data availability sampling you just heard about, which doesn't take much bandwidth or computation. So we need these highly sophisticated builders.

Most people think of PBS as a future thing, but it's already here as an out-of-protocol PBS solution, displayed here. We have various builders, each of which constructs a block and sends the full block to a relay along with a bid, which is just how much it's worth to them to be included in the proposer's slot. The relay picks the highest bid from the builders and offers it to MEV-Boost, which is just a program operated by the proposer. MEV-Boost selects the most profitable block across all the relays; there are multiple relays, not just one, but each of them is centralized. The relay sends the header, not the full block, to MEV-Boost; the proposer signs the block header and sends it back to the relay, and the relay then releases the whole block to the network.

This comes with trust assumptions. First, the relay can see the whole block content, all the transactions, and since there can be MEV in there, the relay could tamper with the block content to capture that MEV for itself, so the builder has to trust the relay not to do that. Second, since the relay, not the builder or the proposer, releases the block, the proposer needs to trust the relay to release it on time, so the proposer doesn't miss out on proposing rewards. Third, the relay can censor blocks: if there's, say, a Tornado Cash transaction inside a block, the relay could simply refuse to forward it. And the last one is unbiased bid selection: some relay operators also operate builders, and they could be biased toward always selecting the block their own builder constructed.

So what's the proposed approach for PBS? First of all, the builder and the proposer need to register, before the proposal slot, in an on-chain contract, and the builder also needs to stake some amount of ETH. (Oh, okay, that's not good, my slide is misplaced; I'll just try to show it like this.) In the first step, the builder sends the block header and the bid to an intermediary. It's important that this approach does not dictate what the intermediary is: it could be a rollup, a modified version of the MEV-Boost relay, or a data availability layer. Any of those could be used.
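Before going further, here is a minimal Python sketch of the out-of-protocol flow Daniel just described and where its trust assumptions bite. The types and function names are mine, purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class BlockBid:
        builder: str
        header: str      # the proposer only ever sees this part
        body: str        # full transactions, visible to the relay!
        bid_wei: int

    class Relay:
        def __init__(self):
            self.bids = []

        def submit(self, bid: BlockBid):
            # Trust assumption 1: the relay sees the full body here, so
            # builders must trust it not to steal their MEV.
            self.bids.append(bid)

        def best_header(self):
            # Trust assumption 4: an honest relay picks the highest bid, but a
            # relay that runs its own builder could be biased.
            best = max(self.bids, key=lambda b: b.bid_wei)
            return best.header, best.bid_wei

    def propose(relays):
        # MEV-Boost: pick the most profitable header across all relays.
        header, bid = max((r.best_header() for r in relays), key=lambda h: h[1])
        # Trust assumptions 2 and 3: the proposer signs without seeing the
        # body, and only the relay can now release (or censor) the block.
        return header, f"signed({header})"

    relay = Relay()
    relay.submit(BlockBid("builder-a", "header-a", "txs-a", bid_wei=5))
    relay.submit(BlockBid("builder-b", "header-b", "txs-b", bid_wei=9))
    print(propose([relay]))   # ('header-b', 'signed(header-b)')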
Now the builder sends only the block header, and that's important: in the approach before, the whole block was sent to the relay, while here we send just the header with the bid. The proposer then selects the best bid via the intermediary and receives the header; in the relay approach the relay chose which builder wins, and now the proposer does. The proposer signs the block header and sends it back to the intermediary, and the builder retrieves the signed header and releases the block to the network itself. So, as opposed to the approach currently used, the intermediary never learns the block content, and it's the builder releasing the block, not the relay anymore.

But what guarantees are there that the builder actually releases the block on time? The builder could grief the proposer by simply not releasing the block, so the proposer misses the block reward, or by not paying the bid. For that you need extra slashing conditions: the builder didn't release the block on time, the builder submitted an invalid block (which can also happen), or the builder didn't pay the promised bid. The first two can be proven like this: we assume the builder is at fault unless the proposer's block includes a transaction that calls a function in the slashing contract, effectively saying 'this block is valid and was released,' because if it weren't, that function could never have been called. I know that's maybe hard to follow; ask me to repeat it at the end. For the unpaid bid, the builder sends the fee to the slashing contract, and there's a challenge period: if the builder hasn't paid by the time it ends, the proposer can slash them.

So these are the reduced trust assumptions. As I said, the intermediary never knows the full block content, so it can't censor the block based on what's inside, and it can't tamper with content it doesn't have. With the relay in the role of the intermediary, we're left with two trust assumptions: that the intermediary does not collude with the proposer (the two could collude to slash the builder unfairly), and that the intermediary does not censor bids, for instance refusing to forward them for some arbitrary reason.

There are also some considerations that are trade-offs rather than trust assumptions. Spam protection: a rollup, for example, could have spam protection built in, with builders paying fees to whoever runs the intermediary. And latency: it depends on which intermediary you choose, because a relay might well be faster than a data availability layer.
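Here's a toy sketch of the unpaid-bid slashing condition just described, with names and numbers of my own invention: the builder's stake sits in a slashing contract, and if the promised bid hasn't been paid by the end of a challenge period, the proposer's challenge claims the stake.

    # Toy model of the challenge-period logic for an unpaid bid.
    class SlashingContract:
        CHALLENGE_PERIOD = 10  # in slots, chosen arbitrarily for the sketch

        def __init__(self, builder_stake: int, promised_bid: int, commit_slot: int):
            self.stake = builder_stake
            self.promised_bid = promised_bid
            self.deadline = commit_slot + self.CHALLENGE_PERIOD
            self.paid = 0

        def pay_bid(self, amount: int):
            self.paid += amount

        def challenge(self, current_slot: int) -> int:
            # The proposer can slash only after the challenge period expires
            # with the promised bid still unpaid.
            if current_slot > self.deadline and self.paid < self.promised_bid:
                slashed, self.stake = self.stake, 0
                return slashed   # compensation for the wronged proposer
            return 0

    c = SlashingContract(builder_stake=32, promised_bid=9, commit_slot=100)
    assert c.challenge(current_slot=105) == 0   # too early to challenge
    print(c.challenge(current_slot=111))        # still unpaid after the deadline: 32 slashed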
The builder's stake is also a hurdle to entry, and it's difficult to say how much they should have to stake: it can't be too much, because that's a barrier for new builders, but it can't be too little either, because it should be at least the amount of ETH the builder is paying as a bid to the proposer (there's a tiny illustration of this bound after the Q&A below). We can also introduce an additional DA layer for the rollup and relay variants just to reduce trust assumptions further; of course, if you use a DA layer as the intermediary, you don't need the extra one. That would remove the heavy trust assumption that the proposer and the intermediary can collude, because the DA layer can attest that the data was available at the time. Future work includes a searcher/builder market, a restaking-style design, and a dynamic staking amount: instead of dictating how much builders must stake, the proposer could choose how much risk it's worth taking on a builder given what that builder has staked. And there could be extra slashing conditions. Yeah, and that's it. Any questions?

[Audience] For the builder to get the block on chain, the block has to be signed by the proposer; does the header need to be signed? [Daniel] Yes, the header is signed. That's also how it's done currently: the proposer signs only the header, the blinded block, and the builder attaches the signed header to the body and gives the full block to the network. [Audience] And what should the intermediary return to the proposer? [Daniel] Of course they can provide that; right now it would be the full block. [Audience] Can you say more about how the intermediary and the proposer could collude? [Daniel] Oh yeah, sure. All of these intermediaries are basically there for data availability. Say the intermediary is a rollup: the proposer can collude with it against the builder by having it withhold the signed header from the builder while still attesting that the signed header was available at the time. These are the trusted entities that attest to the availability of the signed header, so if they claim it was available when it wasn't, they can get the builder slashed, for example for not releasing the block on time.
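The stake-sizing constraint from a moment ago is simple enough to state as a one-line check; this is just my illustration of the bound, not a formula from the talk.

    def acceptable_builder(stake_wei: int, bid_wei: int, proposer_max_risk_wei: int) -> bool:
        # Lower bound: the stake should at least cover the promised bid, or
        # slashing cannot make the proposer whole. With dynamic staking, each
        # proposer instead decides how much unbacked risk they will accept.
        return stake_wei >= bid_wei or (bid_wei - stake_wei) <= proposer_max_risk_wei

    print(acceptable_builder(stake_wei=32, bid_wei=9, proposer_max_risk_wei=0))  # True
    print(acceptable_builder(stake_wei=4,  bid_wei=9, proposer_max_risk_wei=2))  # False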
[Next speaker] Our project concerns the Ethereum beacon chain, where it's quite important to have interoperability between client implementations. For example, a Teku validator client can connect to a Lighthouse beacon node, and a Lighthouse validator client can connect to Teku, but with Prysm neither direction works today. The purpose of this project is to rewrite the Prysm validator client code to be compatible with the Beacon API instead of Prysm's internal gRPC API, because the Beacon API is the official API, an HTTP API defined in a specification, and that's what we're trying to use. The fellows were Pat, Avignon, Ruv (who is not here), and me, and the mentors were Wadek, who gave a talk two weeks ago on Tuesday, and Jax from the Prysm team.

Before the fellowship we had several cases. A Prysm validator client could communicate with a Prysm beacon node, but only using gRPC requests, Prysm's internal gRPC implementation. A non-Prysm validator client, like Teku's, could talk to a Prysm beacon node: it sends an HTTP Beacon API request, and the Prysm node converts it into a gRPC request internally to handle it correctly. But it was not possible for a Prysm validator client to query a non-Prysm beacon node, and it was not even possible for a Prysm validator client to query a Prysm beacon node over the Beacon API. Those two cases didn't work.

We took a few steps to remediate that. First, the creation of a design document. Also, because this is something of a breaking change in the validator client, we needed a feature flag to make sure the Beacon API is only used when you want it: when you launch the validator client, you pass a --beacon-api flag, and without it everything is still gRPC (eventually, I hope, the Beacon API will be the default). Then we had to list all the gRPC calls in the Prysm validator client and work out how to convert each into Beacon API calls; sometimes it's easy, a one-to-one mapping, and sometimes it's far more complicated. Then we wrote all the code to query beacon nodes over the Beacon API. That's basically what we did during these four months.

Like I said, the mapping is not always one-to-one; actually, most of the time it isn't. Take the get-duties methods: the Beacon API has three different endpoints for getting a validator's duties, while in the gRPC API it's all one call, so we had to translate between the gRPC structs and the various API structs. We also had to mix and match: you can see, for example, that the propose-beacon-block call uses two Beacon API endpoints internally. What we decided to do was create an interface that takes gRPC structs. The reason was so we wouldn't have to change the rest of the code as we worked: besides the feature flag (we don't want to break Prysm while working on it), we wanted to minimize the risk and the surface area of the changes, so we didn't touch any of the validator code except this very thin layer between the beacon node client and the validator. As long as you pass the same gRPC structs to this interface, the validator doesn't need to know which implementation is behind it: there's the gRPC implementation and the REST API implementation, both implementing the interface, and which one gets used is decided by the feature flag I mentioned. Some functions use streams in gRPC, and since the REST API has no streaming functionality, we emulate the gRPC stream by calling the Beacon API on a one-second interval; eventually those streams won't be needed, once we get rid of the gRPC implementation entirely. At the current moment all the endpoints have been implemented, so it's basically feature-complete, which means that, in theory, we're now interoperable with other beacon node implementations.
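The thin-layer pattern is easy to picture. Here is a minimal sketch in Python rather than Go, with invented names, of an interface that keeps the validator code speaking gRPC-shaped structs while the transport is swapped underneath, plus the polling trick used to emulate a gRPC stream over REST.

    import time
    from abc import ABC, abstractmethod

    class DutiesRequest:              # stand-in for the gRPC request struct
        def __init__(self, epoch):
            self.epoch = epoch

    class ValidatorClient(ABC):
        """The thin interface: gRPC-shaped inputs and outputs either way."""
        @abstractmethod
        def get_duties(self, req: DutiesRequest): ...

    class GrpcClient(ValidatorClient):
        def get_duties(self, req):
            return f"duties for epoch {req.epoch} via gRPC"

    class RestClient(ValidatorClient):
        def get_duties(self, req):
            # Internally this may fan out to several Beacon API endpoints
            # and merge the results back into the gRPC-shaped response.
            return f"duties for epoch {req.epoch} via Beacon API"

    def stream_duties(client: ValidatorClient, epoch: int, polls: int = 3):
        # REST has no server streaming, so the stream is emulated by polling
        # the same endpoint once per second.
        for _ in range(polls):
            yield client.get_duties(DutiesRequest(epoch))
            time.sleep(1)

    use_beacon_api = True             # the feature-flag decision
    client = RestClient() if use_beacon_api else GrpcClient()
    for duties in stream_duties(client, epoch=42):
        print(duties)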
You can now use the Prysm validator client with non-Prysm beacon nodes from the other consensus teams. The Prysm beacon node itself still has the conversion layer, but the idea is that in the future we'll be able to remove all the legacy gRPC code and end up with a lot less maintenance burden and a much smaller bug surface.

Next steps: even though we're feature-complete, there's a lot of testing to do; we only ran short local tests so far. First of all we need more end-to-end tests, and we need to enable them before PRs are merged. We didn't want to slow down the Prysm team, so right now the tests only run after a Prysm merge and don't gate anything, but to catch regressions in the future we should enable them before merging. We also want to remove the special-case code paths in Prysm: since the validator previously had no way to communicate with non-Prysm beacon nodes, there are special code paths we want to take out, and then test purely against the REST API. We also want more tests that mix and match client layers, basically checking whether we're fully conformant or whether there are still bugs to flush out. In the end, the whole reason for this project is to eventually remove the maintenance burden and all these translation layers between gRPC and the REST API; that's the final goal.

[Audience] What stops you from removing the gRPC path now? Is it compatibility while validators update, or is the converter just too much overhead? [Answer] It's that we haven't tested it enough yet. We want more testing done first, without taking on all the consequences I talked about, so we're sure we don't hit scenarios in operation that weren't there before. [Audience] Any other questions? I just want to say, on behalf of Prysm, thank you; it's very useful, and it kind of sucked that we were the odd one out there on live interop. Thanks to you guys we can now do live interop without the gates.

[Next speaker] Good afternoon, everyone. I'm very pleased to talk about our work on defending the gas mempool against DoS attacks. My name is Wanending, and I'm a PhD candidate at Syracuse University; here is my research group, where I work with my labmates and my PhD advisor, Dr.
Tang. The outline of my presentation: first I'll introduce the basic problem we're working on, then the mempool DoS attacks we mitigate, then the design and implementation of our defenses, and finally the evaluation.

First, we all know that in a blockchain there is a transaction pool sitting between clients and miners. If the transaction pool has a problem, clients can't get their unconfirmed transactions into it and miners can't get valid transactions out of it, and the blockchain grinds to a halt. On the miner side, not being able to read unconfirmed transactions causes empty blocks to be generated and lowers miner revenue, and in the long term there will be fewer miners on the chain, making it easy prey for attackers. On the client side, if clients cannot get their transactions included, the blockchain loses its clients.

We discovered the DETER-X attack, which evicts pending transactions using future transactions. Picture a victim mempool with two pending transactions, both sent by Bob, and an attacker node. The attacker sends a transaction with a higher gas price, which evicts transaction 1 from the victim's pool; transaction 2 then turns into a future transaction, because with transaction 1 gone there is now a nonce gap in front of it. We discovered two DETER attacks in total: DETER-X, which I just introduced, and DETER-Z, which exploits latent-overdraft transactions. To sum up, both attacks send high-price but unprofitable transactions to evict medium-price, profitable ones.

How do we defend the mempool against these attacks? We decline the transactions the attackers send. In the implementation, we first check whether an incoming transaction is a future or latent-overdraft transaction, and if admitting it would evict transactions from the pending pool, we intercept the eviction and decline the incoming transaction.

For the evaluation, the first part is mempool security under fixed transaction sequences. We set the initial state of the pool to 4144 pending transactions, the full state of the geth mempool, then attack it by sending 200 future transactions and check whether the pending transactions shrink while the future transactions grow. The second attack sends future and latent-overdraft transactions combined, and we check whether any latent-overdraft transactions appear in the mempool afterwards. The result is that with our defense enabled, no attack transactions remain in the pool. The second evaluation is performance: we use the test cases on GitHub, which send transaction workloads and measure their running time, and compare the old times against the new ones. In the results, some times increase a little and some even decrease, and nothing increases by much, so the benchmark is not significantly worse.
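Here is a condensed sketch of that admission rule, written in Python even though the real patch is against geth, with structure and names of my own: an incoming transaction that is future (nonce gap) or latent-overdraft (cumulative cost above the sender's balance) is declined whenever admitting it would evict a pending transaction.

    from dataclasses import dataclass

    POOL_CAP = 4  # tiny stand-in for geth's real slot limits

    @dataclass
    class Tx:
        sender: str
        nonce: int
        gas_price: int
        cost: int

    def is_future(tx, expected_nonce):
        return tx.nonce > expected_nonce

    def is_latent_overdraft(tx, balance, pending_cost):
        return pending_cost + tx.cost > balance

    def admit(pool, tx, expected_nonce, balance, pending_cost):
        suspicious = (is_future(tx, expected_nonce)
                      or is_latent_overdraft(tx, balance, pending_cost))
        if len(pool) >= POOL_CAP:
            if suspicious:
                # Eviction would be required: decline the unprofitable
                # transaction instead of letting it push out pending ones.
                return False
            pool.remove(min(pool, key=lambda t: t.gas_price))
        pool.append(tx)
        return True

    pool = [Tx("bob", n, gas_price=10, cost=1) for n in range(POOL_CAP)]
    attack = Tx("mallory", nonce=99, gas_price=1000, cost=1)  # high price, future
    print(admit(pool, attack, expected_nonce=0, balance=10, pending_cost=0))  # False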
There is one more security evaluation, which is ongoing work. We put forward this open problem: is our current defense secure enough, or are there unknown crafted transaction sequences that can still deny the geth mempool service? We think a fuzzer can answer this, and we are now building one to get more concrete results. I'm very grateful for the support of the Ethereum Foundation and of my mentor Marius, who helped a lot with the benchmarking and has already put our code on GitHub, though it is not merged yet. I think it's a good time for questions. Thank you.

[Next speaker] Well, hello everyone, thank you for being here; I'll try not to let you fall asleep. So, I'm Ricky; my name is Alex, I'm from Buenos Aires, and I'm happy to share what I've been working on for the last couple of months: an implementation of the consensus-layer peer-to-peer interface for the Helios light client. First of all, for those who want to check it out, this is the GitHub repo I've been working on. I won't display any code, but if you want to follow along, here's the QR.

About my first steps in the EPF: we have this project-ideas file, so I checked what I could do using Rust. Alex Stokes had some MEV ideas, an MEV repo with some utils, and there were also some consensus-layer projects, so I started checking both and made my first update to the MEV repo. Then the Helios light client launched. So what is Helios anyway? Helios is a Rust-based Ethereum light client made by the a16z team, led by Noah Citron. When it showed up, I just explored the issues and started working on some. For those not familiar with what a light client is: it's a minimal implementation of an Ethereum node that connects to the Ethereum network in a trustless manner from low-resource environments, such as a phone (I know that may be a few years away) or a browser.

So, first question: what can be improved? Helios right now fetches consensus-layer data from Alchemy, or whichever RPC endpoint you set. That's not cool, as you may know: you're basically holding a trust assumption about whatever the provider sends you. Definitely something that can be improved. How do we do it? You 'just' have to implement the Ethereum consensus-layer peer-to-peer network interface, which is pretty big (not so much, right?), and adapt the Helios architecture to work with both the already-working centralized provider and the new peer-to-peer interface, perhaps synchronizing through the provider and then switching over to the peer-to-peer network. By the way, light clients use consensus-layer data to verify state: the state roots held in the consensus-layer data let us verify the execution state. Easy stuff.

My initial research was on Ethereum's proof-of-stake architecture; I spent a couple of weeks on that, then moved on to the light client protocol and the consensus-layer peer-to-peer interface. I wrote short posts about both of those topics, which really helped me grasp the concepts. The main driver for writing them was that I had to submit an update for the EPF anyway, so why not make a post out of it? That was really helpful for me, and hopefully it also helps more people understand what I'm trying to do. Following that initial research, I started a deep dive on the peer-to-peer stack: the node discovery protocol, discv5, and libp2p, which carries the gossipsub protocol and the request/response protocol.
Starting with discv5: we need a way to find other peers in the network. discv5 runs over UDP transport only and is inspired by the Kademlia distributed hash table, but it doesn't use multiaddresses; it uses ENRs, Ethereum Node Records, which are flexible records, so you have to build an adapter to make them work with libp2p. This was the first thing I started implementing in my minimal setup. Then we have libp2p. Why libp2p? The specs are clear that the TCP transport must be supported by every client, and libp2p is a modular networking stack: you use it as a base layer and implement your own protocols on top. On top of it we have gossipsub, a publish/subscribe protocol: you can subscribe to multiple topics and receive messages on each topic you're subscribed to, and you can also publish to a topic. This is basically how message data is gossiped between nodes. Then there's the request/response protocol, mainly used to fetch specific data from peers you're already connected to: once a connection is established, you might ping a node to check it's up, or fetch specific blocks (if it's not an archive node you may not be lucky, but maybe it has the blocks you're asking for).

That was the research part; now to the hands-on part. I made an initial implementation to get this network setup working, mainly to abstract away from the Helios codebase and get into a testing environment where I could iterate fast and find bugs faster, building small examples out of the libp2p and discv5 code examples. Among my initial challenges: discv5 needs to be adapted to libp2p, since the regular libp2p stack uses multiaddresses while we have ENRs, so you have to build an adapter between regular peer IDs and node records. I also tried to implement primitive, non-generic data structures. A lot of the code I was studying came from the Lighthouse client, which uses generics to handle every network spec, not just mainnet but testnets and, I think, Gnosis, and I tried to strip all of that out so the setup would be as simple as possible, handling only mainnet data. That was possibly a mistake: I tried very hard to keep the code simple and spent quite some time on it, which is why I call it possible premature optimization; I should have focused more on getting things working.

A couple more challenges. Peers are definitely not that cool: you will get disconnected for a bunch of reasons. At the beginning it was peer scoring, because I wasn't correctly supporting certain request/response protocols; also, if you can't be pinged you get dropped instantly. And this differs per client, so another challenge was testing against multiple client implementations, because right now Lighthouse works, but other clients may not yet work correctly with this setup.

The current capabilities of the project: I can find peers, persisting scoring files for good nodes; I can subscribe to topics and receive their events; and I can communicate with peers over the request/response protocol.
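Going back to the Kademlia structure mentioned above, here's a toy illustration of the idea discv5 borrows (my example, not code from the project): node and content identifiers live in the same ID space, closeness is the XOR of two IDs interpreted as an integer, and a lookup greedily queries the peers whose IDs are XOR-closest to the target.

    import hashlib

    def node_id(name: str) -> int:
        # Derive a 256-bit ID, the way DHTs hash keys and node identities.
        return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

    def xor_distance(a: int, b: int) -> int:
        # Kademlia's metric: a longer shared bit prefix means a smaller XOR.
        return a ^ b

    peers = {name: node_id(name) for name in ("alice", "bob", "carol", "dave")}
    target = node_id("block/12345")   # some content key we want to locate

    # One lookup step: ask the k peers whose IDs are closest to the target.
    k = 2
    closest = sorted(peers, key=lambda n: xor_distance(peers[n], target))[:k]
    print("query these peers first:", closest)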
All right, next steps. This is something I'm currently working on: I still have to adapt the codebase to meet Helios's standards. There are a bunch of Rust modules I've been using that aren't the ones used in Helios, mostly SSZ encoding crates, and I'll also try to use the ethereum-consensus crates, which have more standardized types, so I can integrate into Helios more easily. I have to implement a peer manager: right now I only run an initial discovery step and then connect to every peer, so once you get disconnected there's no further discovery. A peer manager has to maintain a stable set of peers so you can rely on continuously receiving data. And, of course, there's the work on the Helios side to adapt the architecture.

Takeaways from all this work: happily for me, internet communications are no longer magic; I'm super happy that I got to understand how nodes communicate with each other. But debugging networking is super hard; I need better tooling for it, and I'll do some research on network debugging tools. Special thanks to Alex Stokes, who helped me a lot throughout; to Noah Citron from a16z, who basically handed me a lot of material from Helios; and to the Lighthouse team for the great job they do on their client. Thank you all for letting me explore this garden with you.

[Next speaker] Thank you so much. All right, my name is Kevin, and I want to talk a little about the past four months, in which I worked on consensus client reward APIs. As the name already says, the motivation was to develop APIs that provide detailed reward data for Ethereum validators. This is the current state of the art, before we implemented the project (and it still is): on a block explorer you can explore different tabs showing the reward data you're getting for proposing blocks, for making attestations, or for participating in the sync committee, and on the right side you can see that attestation rewards are even split into the source, target, and head votes. The project could be divided into two big milestones: first, deciding on the Beacon API endpoints (the Beacon API is the standardized set of APIs for interoperability between beacon node implementations), and second, implementing those endpoints in one consensus client; we, meaning me and another fellow called NC, decided to go for Lighthouse.

Okay, so what were the challenges? There were mainly four. The first was understanding basic questions about the rewards: what are attestations, what is the sync committee, what do validators actually get paid for? The second was reaching alignment among the contributors so everybody is happy with how the Beacon API endpoints should look. The third was Rust: NC and I are new to Rust and had no prior experience. And the fourth and last was understanding the huge Lighthouse codebase. The conversations between the core developers of the consensus clients were really time-consuming, but there was also some really great feedback; for example, from a developer working on the Nimbus codebase we received the request to
show, for the attestation rewards, the ideal rewards paid, meaning what a validator earns if it votes correctly for target and source, where correctly means voting for the right values and in time.

So what do our APIs look like? This is the attestation rewards API. You can provide a validator index, as you can see here, or a public key if you prefer, which is here, and you append the desired epoch at the end. The output looks like the following. This is the ideal rewards part I already mentioned: an array containing 33 objects, from effective balance 0 up to effective balance 32 (33, sorry, 32), in one-ETH increments. You can look up the ideal reward paid at an effective balance of 32 or higher and compare it to the rewards you're actually receiving. You can also see that the public key got converted to a validator index, and that here the ideal rewards and the actual reward output are the same, so there are no opportunity costs. The second API, for block rewards, looks like this: you provide a slot, and for that slot you get the proposer index and the reward split down into its parts. For the third and last endpoint, similar to the attestation endpoint, you provide a validator index or a public key, append the desired slot at the end, and get the rewards paid for the sync committee.
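To give a feel for the shape being described, here is a mocked-up attestation-rewards response assembled in Python. The exact field names in the final Beacon API spec may differ; this is just my illustration of the 33-entry ideal-rewards array plus the per-validator totals.

    # Hypothetical response shape for an attestation-rewards query.
    ideal_rewards = [
        {"effective_balance": str(gwei), "head": "0", "target": "0", "source": "0"}
        for gwei in range(0, 33 * 10**9, 10**9)   # 0..32 ETH in 1-ETH steps
    ]
    total_rewards = [
        # a pubkey supplied by the user comes back as a validator index
        {"validator_index": "123", "head": "2856", "target": "5511", "source": "2966"}
    ]
    response = {"ideal_rewards": ideal_rewards, "total_rewards": total_rewards}
    assert len(response["ideal_rewards"]) == 33
    print(response["ideal_rewards"][-1])   # the 32-ETH ideal entry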
Okay, what does the future of the project look like? We implemented it in Lighthouse, and it should be released in about a month. I also spotted an issue at Prysm: they plan to implement it too, so I think there's a big success factor in this project. And there's this one guy, Patrick, working on beaconcha.in, who came to the Lighthouse Discord and mentioned they want to implement our API, which is really nice: we used beaconcha.in to verify that our reward data is correct, and now the tables are turning and they're using our API. Another honorable mention: there's this guy called Alex88, some random person in the Lighthouse Discord, and I think he really captures the spirit I experienced over the past four months. He just took our open PRs, built them into his Lighthouse client locally, tested them, and helped us with debugging, without any benefit to himself; he was just interested in getting these APIs working. Here I'm talking about a bug I fixed where one validator got multiple entries, which really shouldn't be the case. I'm closing: thanks to my mentor Michael from Lighthouse, thanks to NC for insightful discussions, and thanks to Joshua and Mario for running the EPF. Thank you.

[Next speaker] Thanks. I put this slide together last minute, because I actually hadn't realized we might need slides. The project I've been working on is the builder API for EIP-4844 in the consensus client, but you'll notice most of the slides aren't strictly about that, because there were challenges with spec updates: the spec changed, and as a result I had to change my implementation as well, with a fair bit of waiting in between, so I kept jumping between different tasks. So I don't really have one standard project like most people; it's a bunch of smaller tasks, and hopefully I can explain them if we have enough time. They mainly focus on EIP-4844, which is proto-danksharding, and on the Lighthouse consensus client, and the goal is contributing to implementation and testing, plus learning lessons, which is quite a big challenge as well. There are a bunch of links here, and I'm going to show some diagrams where they're useful.

The main task I was working on is the builder API for EIP-4844. Gabby and I initially picked this one because we thought it was something whose spec hadn't been started yet, so there was an opportunity to begin from the very beginning: looking into the spec, drafting it, getting feedback, and sending it into the core-dev process. That was cool. Initially we thought this might be a small task, because based on earlier discussion it was going to be a one-field change to the API, so we figured we'd need to add a few more tasks to the project. It turned out we were completely wrong. It's still a work in progress; I don't know if it's 50% done or less, because some recent changes to the consensus spec require a lot of updates. I'm going to quickly show a design, but it's mostly going to be wrong now, because there's been a recent change (quite a big one, for those who follow it) decoupling blocks and blobs on gossip. That changes a lot about how we do the builder flow, because now we're also required to sign the blobs, and I still haven't figured out exactly how we do this, so it's definitely a work in progress.

I was slightly tempted to steal Daniel's diagram, because it's a lot better; this one is slightly different. This is me trying to understand what the end-to-end flow looks like on the builder side: where the block comes from and where it goes (I don't know if it's too small to read). On the proposer side there are three nodes: a beacon node, a local EL, and MEV-Boost, plus a validator client. From the user side, the user sends a transaction, it goes to an execution node, and it gets broadcast, or rather announced, to the other execution nodes in the network; that's how it gets into the mempool, which is still like magic to me; I don't know what the blob mempool is going to look like. Once transactions are in the mempool, searchers and block builders look at the transactions, build blocks, and submit the blocks to the relays. From the proposer side, the flow starts at the validator, which tries to get a blinded block; this part is similar to what Daniel just explained, except that here we also need to get the blob components back alongside the header, so the validator can sign them. There's a lot of discussion about whether we want a full-block flow or a blinded-block flow; right now this design is based on a blinded-block flow.
So what happens is that the validator requests the block, the relay returns the blinded block, the header plus the blinded blobs, to the validator client to sign, and the client signs the header and the blinded blobs and sends them back to the relay. The relay then broadcasts the block and blobs, and also returns the full payload and blobs to the beacon node, which broadcasts them again. That's the final flow, and it gives a good picture of the kind of work I've been doing.

Next: there have been quite a few tasks related to this, so I'm just putting up a bunch of links. I spent quite a bit of time with Gabby on the spec updates. We started with the spec because we thought it would be good to have it out early to start discussions, so we drafted it, and then realized we also needed to update the Capella builder spec, a dependency, so we extended our scope to cover that too; that PR should be merged. And we now also have a separate Deneb PR, because 4844 got pushed back to the next fork. So that's all work in progress, and I have a draft presentation that's mostly outdated now, so I won't go through it much.

I ended up spending a lot of time on this, so let me go through the updated version of the design, except that it has become more complicated and is potentially outdated as well, so I'll need yet another version. The main thing is that we experimented with separating the publishing of blocks and blobs on the beacon node side, which turned out to be a bit of a problem, because it requires the beacon node to unblind the blobs. That means the validator client would always have to sit in front of a single beacon node, which is not always the case: if you use something like Vouch, or a sentry setup where multiple beacon nodes act as fallbacks, there are problems unblinding the blobs. So this could be more complicated than it's worth; we haven't found a solution yet, and right now it might be easier to just keep block and blobs combined.

The other component of my project is the testing side of EIP-4844. At the beginning of the project I looked at the EIP-4844 interop testing repo, created mostly by Optimism I think, and Lighthouse was missing from it. I thought it would be cool to have it there, because the repo contains the docker-compose files for the other clients, execution clients included, so you can test interop and a few of the main features of EIP-4844. Moffy and Roberto had some related tests for EIP-4844, so I thought it would be cool to add ours; we spent a bit of time with Moffy getting that request merged, and also getting a docker-compose for Lighthouse to join the EIP-4844 devnet v3, which is probably not that useful now that we have v4 and v5. I also did some work on the blob-retrieval API in Lighthouse, just to help with testing: before we had that API, we mainly used the request/response interface to query for the blobs, and that turned out to be a challenge similar to what Ricky described about peers being not that cool, since you have to implement the gossip topics. That was slightly painful, but we got it done, and now testing is a bit easier because you can just query for a block and its blobs and get them back.
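Here is a toy sketch of the blinded-block flow with blobs walked through above, and of why the unblinding step is awkward. All types and helpers are invented for illustration; a real implementation uses KZG commitments rather than hashes.

    import hashlib

    def commitment(data: bytes) -> str:
        # Stand-in for a KZG commitment / body root.
        return hashlib.sha256(data).hexdigest()

    # The builder keeps the bodies; the proposer only ever sees commitments.
    blob_bodies = [b"blob-0", b"blob-1"]
    blinded = {
        "block_root": commitment(b"block-body"),
        "blob_commitments": [commitment(b) for b in blob_bodies],
    }
    signed = {"message": blinded, "signature": "sig(" + blinded["block_root"] + ")"}

    def unblind(signed_blinded, bodies):
        # Only a party holding the original bodies can perform this step,
        # which is why unblinding at an arbitrary beacon node (behind Vouch
        # or a sentry setup) is awkward: that node may never have the bodies.
        expected = signed_blinded["message"]["blob_commitments"]
        assert [commitment(b) for b in bodies] == expected
        return {"signature": signed_blinded["signature"], "blobs": bodies}

    print(unblind(signed, blob_bodies))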
But now it's all going to be changed as well, I'm pretty sure. This is the next slide: basically, this endpoint allows you to provide a block ID or a slot number and get back the blobs. This example is a slot with no blobs, so the blobs array is empty, and you also get the aggregated KZG proof, which doesn't exist anymore in the spec, so this needs to be updated. There's also an example with blobs; that's also on my list to update. So that's the testing. The next thing I looked at was understanding Lighthouse, and work on Lighthouse that is less related to 4844. Some notes here on my ambitious attempt to understand Lighthouse and diagram it; it's bigger than what a slide can fit, so it's too small here, but it's quite interesting. I'm not going to go into too much detail, but this is roughly what happens when you start a new Lighthouse node: you have a main component where it creates a beacon node, and it's got a bunch of stuff in it. It's got an API server, a metrics server, and then a networking service where you get messages from peers and have to pass them on to other components to process. In this diagram I was focused on understanding the beacon processor, which looks after things like blocks that come in. So what happens when you get a message is that the networking service passes the message on to a router, which then passes the message to a beacon processor, which spawns worker threads to process the tasks in parallel. This initial exploration turned out to be helpful, because it helped me with the next thing I was about to mention, which is another issue on Lighthouse: backfill sync. Backfill sync is a mechanism that happens when you checkpoint sync from another beacon node: you download the latest finalized state, then forward sync happens so you can sync to the latest slot, but there's also a backfill sync which will sync all the way back to genesis. I think there's also an option to limit that to the weak subjectivity checkpoint, which is about five months, and that will decrease the storage requirement for users. But this work is slightly different: this is about rate limiting the backfill processing. The reason is that some people in the community raised an issue about CPU consumption when they initially sync; it looks like the CPU just gets hammered by all the backfill processing, because it doesn't really get limited, it just keeps processing until it goes all the way back to genesis, and sometimes that requires a lot of CPU power. The point of this work is to slow it down with rate limiting, because backfill is probably not super important for most people: if you run a validator you don't need all the historical data. It might be useful for people who want an archive node with historical data, so I actually added a feature to let you toggle the rate limiting on and off. If you run an archive node you don't want it rate limited, you just want it to go as fast as it can, so you'd do that; but for most people it's going to be rate limited by default, unless you override it. I'll quickly show a design that is slightly low level; I'm not sure it's going to be useful for everyone, but this is how it currently works within Lighthouse, without going into too much low-level detail.
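Here's a rough sketch of the event-channel-and-worker pattern just described, including the idea of parking backfill batches in an intermediate queue instead of processing them immediately, which is the design covered next. The types are simplified stand-ins, not Lighthouse code:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-ins for the real types; this is an illustrative sketch only.
struct Block;
enum Work {
    GossipBlock(Block),
    BackfillBatch(Vec<Block>), // a batch is roughly 64 blocks
}

fn main() {
    let (tx, rx) = mpsc::channel::<Work>();

    // Networking service / router side: forward peer messages as work items.
    thread::spawn(move || {
        tx.send(Work::BackfillBatch(Vec::new())).ok();
        tx.send(Work::GossipBlock(Block)).ok();
    });

    // Beacon-processor side: pull work off the event channel and dispatch.
    let mut backfill_queue = Vec::new();
    for work in rx {
        match work {
            // Latency-sensitive work goes straight to a worker thread.
            Work::GossipBlock(block) => {
                thread::spawn(move || drop(block) /* process immediately */);
            }
            // Backfill batches get parked in an intermediate queue and are
            // drained on a schedule (a few batches per slot) instead of as
            // fast as possible: the rate-limiting idea described next.
            Work::BackfillBatch(batch) => backfill_queue.push(batch),
        }
    }
    println!("queued backfill batches: {}", backfill_queue.len());
}
```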
As I mentioned before, the network passes messages on to a beacon processor, and when backfill happens we get the backfill batches with the blocks from an event channel; that goes to the beacon processor, which processes them immediately. So this is what happens currently: it just keeps looping through to see if there's any work coming in, and it doesn't really get limited, it's a constant loop. The solution proposed by the Lighthouse team was to add a rate limiter in between: rather than processing immediately, we have an intermediate queue. This diagram is probably too small, so I'm not going to go into too much detail; I'll share the slides as well. But basically, what happens now is there's an intermediate rate-limiter queue: whenever we receive a batch, it gets sent to that queue, and we don't process it straight away, we process it at a scheduled time later. We did a bit of performance benchmarking just to see the impact. Right now we process three times per slot; every batch is 64 blocks, so we process three batches in a slot. It's not like before, where we just went as fast as we could, usually four or five batches per slot; now we slow it down to three, and it happens at roughly six, seven, and nine or ten seconds after the slot starts, so it doesn't impact validator duties as much, because it doesn't happen in the critical window at the start of the slot. So this is now scheduled processing. And then the most recent piece of work I've been doing is the blob signing bit. That's new work that came up with the decoupling of block and blobs: now we have to sign the blobs that get propagated to the other nodes. We're working on this now, and it's still not finalized, because there have been discussions on the blob signing as well, about whether it happens as a single combined signing request or separately from the block. There are open PRs in the consensus specs; I have a PR here, but it also needs to be updated depending on what the outcome is, which has happened to most of the work that I've been doing. So it's mostly iteration, but I don't think it's wasteful, it's just good learning, like I mentioned. A lot of the work is in progress; there are lots of spec changes along the way, and sometimes changes can take a while, partly because you don't want to get the spec merged too early and then have to modify it later. It's good to have some influence from the other client teams, and the people that have been doing experimentation might have takeaways from it, so it's generally not a bad thing, and it's good to have draft specs out there, at least to start discussions. I've also found it quite useful to implement a POC before the spec is finalized, because it helps discover issues, which happened with most of my PRs anyway. I'm happy that some of them are merged; some of them are still draft, or closed, sadly. Yeah, that's all. Special thanks to the EF, to Josh and Mario, and on the Lighthouse team to Sean, Dieter and Paul, very helpful, and to everyone else that's been helping me as well, thank you. Yeah, so hi everybody, I'm B, we met already. As part of the fellowship I was working on a bundler implementation for the new account abstraction proposal, in Rust. The motivation was this quote from Vitalik; he said account abstraction has for a long time been a dream of the Ethereum developer community. There are two things to this: one is that account abstraction has been a dream, so it is
something that is very beneficial, and it also says that it has been a dream for a long time, so there have been many attempts to do account abstraction. There were three improvement proposals; I think the first one is already many years old. The problem with the first two proposals is that they wanted to make changes to the EVM, to add new opcodes and things like that, which was a problem because it was never the best time to do such big changes to the EVM; there were other priorities, like EIP-1559 or the Merge, and now there's sharding. So the last proposal that was introduced, ERC-4337, tries to do account abstraction in a slightly different way. Basically, the idea is that it avoids consensus-layer protocol changes and relies on higher-level infrastructure instead: the consensus and execution clients don't have to make a single change; you just introduce new entities that work with execution clients, and you can run account abstraction without touching the core clients. So you have to introduce several entities in this model. One is the bundlers, which I was working on. Then you have the entry point smart contract; this is a singleton smart contract, so there's only one on the blockchain, and all user operations, like Gaurav already mentioned today, go through this smart contract. Bundlers submit the user operations to this smart contract, and the smart contract verifies the signature and also calls the paymaster; a paymaster is an entity that can sponsor the gas for user operations. Then it executes the user operations, which contain the smart contract calls for that user operation. A user operation is basically very similar to a transaction, except it can also carry other signature schemes. Back to the bundler: a bundler is a kind of node-equivalent component for account abstraction. It can be run by anybody, and it has several jobs: it receives user operations from smart contract wallets, it validates these user operations and puts them into a bundle, which is a normal transaction that is in the end submitted by the bundler to the execution client, and it also runs a peer-to-peer protocol to exchange user operations with other bundlers. The goal for this fellowship was to implement the bundler in Rust, a full implementation from the ground up, to pass the bundler spec tests that were released by the eth-infinitism team in December, and then to try the bundler with one of the smart contract wallets. These goals were set in October; now we are in March already, and some goals were achieved, but not all; we'll come back to that. So this is the architecture of the bundler. I wanted to split responsibilities into different components; this is similar to what Erigon is doing with their execution client. Basically I have three main components: the JSON-RPC API, then the user operation pool, which is similar to the transaction pool, and the bundler core logic that actually bundles user operations. Outside the bundler we have the Ethereum execution client and also the other bundlers. The difference is that with the other bundlers there's the peer-to-peer protocol for exchanging user operations, and with the execution client, the bundler submits its bundles to it.
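To make the user operation flow a bit more concrete, here's a rough Rust sketch of what a user operation carries and what the bundling step does. The field names follow ERC-4337, but the types and the helper function are simplified stand-ins, not the actual implementation:

```rust
// Simplified stand-in types, for illustration only.
type Address = [u8; 20];
type Bytes = Vec<u8>;

struct UserOperation {
    sender: Address,            // the smart contract wallet
    nonce: u64,
    init_code: Bytes,           // deploys the wallet on first use
    call_data: Bytes,           // the actual contract call(s) to execute
    call_gas_limit: u64,
    verification_gas_limit: u64,
    pre_verification_gas: u64,
    max_fee_per_gas: u64,
    max_priority_fee_per_gas: u64,
    paymaster_and_data: Bytes,  // optional paymaster sponsoring the gas
    signature: Bytes,           // scheme is up to the wallet, not fixed ECDSA
}

// The bundler validates user ops from its pool and packs them into a single
// transaction that calls the entry point contract.
fn bundle(pool: Vec<UserOperation>) -> Vec<UserOperation> {
    pool.into_iter().filter(sanity_checks).collect()
}

fn sanity_checks(_op: &UserOperation) -> bool {
    true // field checks, gas limits, stake/reputation rules, etc.
}
```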
So this architecture was interesting for me because, let's say you have specific logic for how you want to bundle user operations into a bundle: you could sort the user operations and bundle the ones that pay you the most, or support just one paymaster, or have some other logic. With this architecture, because you can run the components separately, you can spin up another bundler instance sharing the same user operation pool, and that new bundler instance can have different logic for how to bundle the user operations. There's also one more thing: when the bundler submits the bundle to the execution client, it has to communicate with the block builders or use Flashbots Protect to prevent front-running. Say a bundler finds some smart logic for how to combine user operations into a bundle, and then submits the bundle to the execution client via the public transaction pool; other bundlers could see what it did, basically steal the bundle, pay a higher fee, and front-run the original bundler. So there are solutions to that. Here are the details of what had to be developed: sanity checks, simulation, mempool, reputation, the peer-to-peer protocol, and communication with execution-layer nodes. Maybe I didn't pick the best colors for this slide, but the darker items are already implemented and the lighter ones are not. The sanity checks are simple checks run when user operations are submitted to the bundler; basically it just checks whether all the fields are set correctly. Every bundler also has to simulate the user operation: this is basically calling the handleOps function of the entry point smart contract, and then Geth's debug API is used to get information about which opcodes the user operation uses, because some opcodes are forbidden. The reason is that there's a time when the bundler receives the user operation and a time when it actually submits the user operation in a transaction to the node, and between those times things can change, like the block number; a user operation that depends on the block number could be valid at validation time but invalid at submission time, so opcodes like that are forbidden. And there's also a reputation model: each bundler keeps reputation for paymasters and factories, because if some paymaster tries to cheat by submitting invalid user operations, it will get banned by the bundler after some time; this is basically to prevent denial-of-service attacks. The mempool is already implemented; there are two implementations, one simple in-memory one, and one backed by a database so it's persistent if the bundler shuts down. Support for multiple mempools is also already implemented, because, like I said, there is a single entry point smart contract, but it can turn out that after some time there's a bug in the entry point, so the entry point can be upgraded, and while the entry point is being upgraded both entry points will be in use for a while before all smart contract wallets move to the new one, so the bundler should support multiple mempools.
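As a toy illustration of that reputation idea, with made-up thresholds (the actual spec defines precise throttling and banning rules):

```rust
use std::collections::HashMap;

type Address = [u8; 20];

#[derive(Default)]
struct Reputation {
    ops_seen: u64,     // user ops accepted into the mempool
    ops_included: u64, // user ops that actually made it on chain
}

#[derive(Default)]
struct ReputationTracker {
    entries: HashMap<Address, Reputation>,
}

impl ReputationTracker {
    fn record_seen(&mut self, paymaster: Address) {
        self.entries.entry(paymaster).or_default().ops_seen += 1;
    }
    fn record_included(&mut self, paymaster: Address) {
        self.entries.entry(paymaster).or_default().ops_included += 1;
    }
    // Ban a paymaster whose ops keep failing to make it on chain; the 20 and
    // 10x numbers here are arbitrary, just to show the shape of the rule.
    fn is_banned(&self, paymaster: &Address) -> bool {
        self.entries
            .get(paymaster)
            .map(|r| r.ops_seen > 20 && r.ops_included * 10 < r.ops_seen)
            .unwrap_or(false)
    }
}

fn main() {
    let mut tracker = ReputationTracker::default();
    let pm = [0u8; 20];
    for _ in 0..30 {
        tracker.record_seen(pm); // lots seen, none included
    }
    assert!(tracker.is_banned(&pm));
}
```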
There's also the idea that everyone can define their own mempool, with different rules about which user operations are allowed, but at the moment there is only the canonical mempool. The peer-to-peer protocol was defined, I think, three weeks ago. The idea was to use libp2p and SSZ, because I think libp2p is eventually meant to replace devp2p on the execution clients too, so if the bundler uses libp2p, the peer-to-peer protocol won't have to change later. And yeah, that's it. Here are the spec tests; there are a lot of failures. Some tests are passing, like the forbidden opcodes ones. It basically took a long time to set everything up, but now it will be much more convenient, because for some tests you just implement one function that checks one rule for the user operation, and it can be like 15 lines of code to pass some additional tests. But just to run these tests, you had to set up Docker files, run a Geth node in development mode, deploy the entry point contract on each run, and things like that, so it took quite a long time. In the future, the plan is to finish this implementation and pass the remaining tests. There was also the idea to develop a client library in Rust for user operations, or to extend the ethers-rs library for that, because ethers-rs is very good, but if you want to do specific things, or things that are newer and not used by many projects yet, it turns out the library probably won't support that functionality, so you have to do it yourself. And the idea is also to use Reth in some way. One way would be to add a bundler component to Reth; or, because Reth is developed in a very modular way with many crates, it's also possible to reuse one of Reth's crates, maybe for the devp2p protocol, because bundlers need to listen to the execution-layer node to see which user operations were already included in blocks, so they can remove those user operations from the mempool. There was the idea to show a demo, but now that I'm on Mario's computer that won't happen; if someone is interested, we can talk later. So, some links: I was working on the bundler with Will, who was also a fellow working on this; he's actually very good in Rust, so it was interesting to work with him. The mentors were Yoav and Dror, and I don't know why I didn't put his full name there, Dror Tirosh. Here are some links to check; this is the bundler implementation, and if anyone wants to ask questions afterwards, feel free. That's it, thanks. Okay, hi everyone, my name is Andri. For my project I did a kind of self-sustained piece of work on the relay monitor. First, a little bit about me; I think I did an intro, but there are lots of new faces. I graduated a few years ago, spent a year and a half working at a company on a distributed database, on the kernel team, and over the last year I've been doing application-level research at this place called 0xPARC, some interesting experiments there, I'm happy to chat after. A little bit of background on why I'm interested and how this came about: I got interested in core development and followed the Merge, which is just an amazing technical achievement, and then after the Merge I got interested
in MEV. One thing that caught my attention was the adoption of MEV-Boost: these relays popping up, and pretty significant adoption, with a lot of blocks being delivered through essentially one relay and one repository, one single source code. So I got interested in this, and during the program I got interested in short-term mitigations: given that in-protocol PBS is not coming soon, what can be done in the short term? I stumbled upon the relay monitor repo by Alex Stokes, which was also one of the suggested projects, so I started on it and expanded the relay monitor. But first, why a relay monitor? I'm going to try to keep this very brief; I think there was a mention of MEV-Boost relays before, but this is the reality right now: there are barely double digits of relays, and a pretty high percentage of blocks is being delivered through a relay. MEV-Boost is an out-of-protocol PBS implementation where blocks are constructed not by validators but by builders, and then forwarded through this relay, which is required for multiple reasons, but the fundamental reason is that builders and validators don't trust each other. At a high level, validators opt in; they don't have to run MEV-Boost, but a lot of validators do. The relay signs up for certain promises: it delivers headers to proposers when it's their time to propose, and it signs up to deliver payloads, so it promises, hey, if you sign over this, I'm going to give you the payload, so then you have a block. And then you've got proposers, who in turn also sign up to do certain things; they enter this kind of contract where, when it's time to propose a block, the validator says, okay, what kind of header do you have for me, how much value can you give me, how much MEV did some block of yours extract, and then they wait for the payload and hope it comes. There are some things here I already mentioned, like the dominance of MEV-Boost; another is that MEV-Boost is just another Go binary. There were already incidents where, because of improper validation by the relay, builders submitted blocks with invalid timestamps, and those got forwarded by the relay to proposers as bids; but the proposers checked for themselves, saw that the timestamp was incorrect, and had to fall back on local building. Not terrible, but you can see that the relay is in the middle, and if it fails to do something, there are direct consequences for the proposers who communicate via that relay. So basically, relays are in a privileged position. What a relay monitor does, fundamentally, is connect to relays: you specify which relays you want to monitor, and it checks the parameters (I'll mention how), it can check the payloads delivered by the relay, and it computes scores. The scores I kind of came up with myself, but you can do multiple things there. After doing all that, it exposes an API for records and stats; the point is to surface this information for anyone who cares, for instance a validator who wants some insight into what a relay is doing. And this last point is not that important, but it's another way to allow validator input into the behavior of a relay.
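At its core, the monitor loop is simple. Here's a hypothetical sketch of it, with made-up types, just to show the connect, fetch, validate, record shape:

```rust
// Illustrative stand-in types; not the actual relay monitor's code.
struct Relay {
    host: String,
}
struct Bid;   // header + value, signed by the relay/builder
struct Fault; // e.g. consensus-invalid bid, missing payload

struct Monitor {
    relays: Vec<Relay>,
    faults: Vec<(String, Fault)>, // keyed by relay identity in the real thing
}

impl Monitor {
    // Run once per slot: pull what each relay serves and validate it.
    fn tick(&mut self) {
        for relay in &self.relays {
            if let Some(bid) = fetch_bid(relay) {
                if let Err(fault) = validate_bid(&bid) {
                    // Store the fault; scoring and the API read from here.
                    self.faults.push((relay.host.clone(), fault));
                }
            }
        }
    }
}

fn fetch_bid(_relay: &Relay) -> Option<Bid> {
    None // would hit the relay's bid endpoint
}
fn validate_bid(_bid: &Bid) -> Result<(), Fault> {
    Ok(()) // signature, parent hash, timestamp checks, etc.
}

fn main() {
    let mut monitor = Monitor {
        relays: vec![Relay { host: "relay.example".into() }],
        faults: vec![],
    };
    monitor.tick();
}
```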
A validator can make a POST request and say, look, here's a block I received from this relay, I signed over it, here's the payload, something went wrong, and you can kind of prove that something weird is going on. Here's a simplified diagram, again, of what out-of-protocol PBS does: the MEV-Boost relay separates the proposal into headers and block payloads, with different APIs for each. And the relay monitor is this thing over here: it talks to the relay API and the proposer API, and there's an opt-in way for proposers to submit reports; so it's a watchtower on the side. I've mentioned this already, but there are two pieces to out-of-protocol PBS: first headers, then block bodies, because you can't just have builders submit the entire block body, otherwise the MEV could get stolen by the proposer. For demo purposes I'm going to show header validation, where the monitor validates headers received from the relay. It's basically a header and a bid; I use the terms interchangeably. What I did: I started with Alex's repo as the foundation and added a bunch of stuff; this is just a subset. Bid storage and analysis: it stores every bid, everything it can find from every relay, and this is the version of the headers I'm going to show. It allows for time-based queries, so you can ask about old bids, like how many bids a specific relay delivered between these slots. You can get records of faults. There's scoring that I implemented for reputation and bid delivery rate, pretty simple proofs of concept, and there are also operational metrics that are exposed. I wrote up a spec; it's still a work in progress. And then I created this fake builder, because I wanted to simulate MEV-Boost relays that are faulty; even the one on Sepolia, the one that Flashbots runs, works and doesn't really fail that much, so I wanted to simulate a bunch of failures, and I wrote up this thing that can simulate faults. I have a few quick demos; I recorded them just for the sake of simplicity, so I don't keep switching back and forth. Is this big enough?
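For reference, the fault-injecting builder works roughly like this sketch (all names hypothetical): serve otherwise well-formed bids, but corrupt one field on demand so the monitor has something to catch:

```rust
#[derive(Clone, Copy)]
enum FaultMode {
    None,
    WrongPublicKey,    // sign correctly but advertise the wrong pubkey
    InvalidParentHash, // bid doesn't build on the requested head
}

struct FakeBid {
    parent_hash: [u8; 32],
    pubkey: [u8; 48],
    signature: Vec<u8>,
}

fn make_bid(head: [u8; 32], mode: FaultMode) -> FakeBid {
    let mut bid = FakeBid {
        parent_hash: head,
        pubkey: [0; 48],
        signature: vec![],
    };
    match mode {
        FaultMode::None => {}
        FaultMode::WrongPublicKey => bid.pubkey = [1; 48],
        FaultMode::InvalidParentHash => bid.parent_hash = [0xff; 32],
    }
    bid
}

fn main() {
    let bid = make_bid([0u8; 32], FaultMode::WrongPublicKey);
    assert_eq!(bid.pubkey, [1u8; 48]); // the monitor should flag this bid
}
```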
Cool. So first, pretty simple: you can see the endpoints, monitor, fault stats. I'll also mention that I put the spec for this API up online, I just typed it up, so you can check it out; the demo just shows requests in Postman, but it implements the spec I wrote up. So, the monitor's fault stats endpoint: you send the request and get back data that looks like this. It's a report for all the relays that are monitored, indexed by public key, which is the relay identifier; you can see a hostname, this is the Flashbots builder, and over the last however many slots (you can specify the slot bounds) it delivered 914 bids. Pretty simple so far. You can also query an individual relay: you specify a public key in the route and you get stats for just that relay; here, for example, zero faults for this one. The other route is the actual records, so you can request a record report. The first one was the stats report; this is a record report, and it gives back a list of, for example, consensus-invalid bids. So a relay misbehaves, and the relay monitor detects, look, there's a bid that's invalid; in this case you can see the parent hash is invalid. It recorded a bunch of bids, dumped them into the database, and you can query for them. It gives you some info, like the proposer pubkey, the slot, and what exactly was wrong, and it's grouped under consensus-invalid bids. There are different faults, and the relay monitor can track all of them; there's a list of all the fault types it tracks. Let's see what else is here. Same thing per relay on the records side. Other metrics: I'm just showing validator counts here; these are relay operation metrics, like how many bids the relay monitor analyzed and how many of them were valid. So, pretty simple endpoints, just an API. The next thing I wanted to show is some of the scores. This is an endpoint, /score/reputation, and what's happening here is that the relay monitor uses the information it has dumped, all the records of all the faults, and computes a trust score from it. Right now the score is pretty simple; it's just an exponential based on recency: if a fault is very recent the score is low, and as more time passes without the relay misbehaving, the score increases. So in this little demo, I send the request; a perfect score is 100, and this is a score of 32 for this fake relay that I'm running; now it's jumped to 39, because every slot with no faults the score keeps increasing little by little. Okay, now it went up to 45; you get the idea, as long as the relay doesn't misbehave, the score increases. What I'm doing next is spinning up a faulty relay, using this little tool that I made, and enabling some sort of fault, so I'm injecting fake faults. In this case I'm using the wrong public key: the bids that this fake local relay delivers are going to have a mistake in them; they're going to be signed, but the public key is going to be wrong, and the relay monitor is supposed to detect it. The score was 45, and now it jumped down to zero: it penalized the relay, detected the bad bid, figured out that it's very recent, and dumped the score all the way to zero, because the relay is using that wrong public key.
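The scoring logic behind that demo has roughly this shape. The exact curve and constants here are illustrative, not the monitor's actual formula:

```rust
// Recency-based trust score: a fault drags the score to zero, and it
// recovers exponentially as fault-free slots pass, capping out at 100.
fn reputation_score(current_slot: u64, last_fault_slot: Option<u64>) -> f64 {
    match last_fault_slot {
        None => 100.0,
        Some(fault_slot) => {
            let elapsed = current_slot.saturating_sub(fault_slot) as f64;
            // Half-life-style recovery; ~100 after a few hundred clean slots.
            100.0 * (1.0 - (-elapsed / 256.0).exp())
        }
    }
}

// The bid delivery score is just the fraction of the last n slots in which
// the relay delivered a bid.
fn delivery_score(bids_delivered: u64, window_slots: u64) -> f64 {
    bids_delivered as f64 / window_slots as f64
}

fn main() {
    println!("{:.0}", reputation_score(1000, Some(1000))); // 0, fault just now
    println!("{:.0}", reputation_score(1128, Some(1000))); // ~39, recovering
    println!("{:.0}", reputation_score(2000, Some(1000))); // ~98, almost healed
    println!("{:.2}", delivery_score(90, 100)); // 0.90 of recent slots had bids
}
```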
So what happened there is that the score went back down to zero, because it's a critical fault. But you can imagine that maybe it was a bug in the Flashbots relay or whatever; it got penalized, I shut the faulty relay down, so imagine people disconnect from it, or the bug gets fixed, and now, with no new faults, the score is going to keep climbing back up. Pretty simple. The other demo I had is the other score, bid delivery; I'm not going to show it, because it basically just computes the ratio of how many bids were delivered in the last n slots. For instance, you can imagine a validator might use this to say, well, this relay just stopped delivering bids, what happened? I actually saw this, it was funny: on Tuesday, after the upgrade, the Flashbots relay just stopped producing bids for a few hours, so this delivery score ratio would drop, since the relay's not delivering bids. There are a few links I wanted to share, just some work-in-progress repos: the little fuzzer, and my implementation, my fork of Alex's repo. It's a work in progress; I hope it's going to be merged back into the original repo, just aggregating mine and Alex's work. I forked it in order to commit work-in-progress code without waiting for code reviews. And lastly, I wanted to finish with this little meme. I think MEV is very interesting; it's a pretty big problem, or challenge, and it's very difficult to design things that solve it. This is not really about solving it; it's more about illuminating activity, seeing what's going on, and potentially these watchtowers can act as sources of information for validators, to see what's going on with a specific relay. It's not much, but it's honest work. Finally, thank you Josh and Mario for the opportunity to be here, and thanks to Alex for starting the original repo. Yeah, I think it should be both, but I imagine it's probably useful to have a kill-switch kind of thing; I remember coming across some designs that suggested that if something goes terribly wrong, the consensus client or something just shuts off access to the relay. Obviously there are challenges there, right, in that people could grief and cause validators to disconnect from MEV-Boost, but I think it should be both. For things like the delivery score, I think it's less important, because a validator that wants to maximize profits and automatically switch to the relay that delivers the most bids could fork MEV-Boost themselves, or do it manually. For something like faults, I can see it being useful inside MEV-Boost, so MEV-Boost can disconnect as soon as possible from a relay that's faulty, until the score goes back up on the strength of no new faults. No, so I have a thing running on Sepolia, monitoring the builder there and the stuff that I showed. As I mentioned, there's just not a lot of interesting activity, because that builder doesn't get much activity and doesn't fail much; that's why I came up with
this fuzzer, to make relays fail artificially. But yes, I've been monitoring the one on Sepolia that's left. I actually think I might go and watch the real ones too; it would be very interesting, because releasing this kind of thing puts some pressure on the MEV-Boost relay operators. Yeah, for sure. I think some things like testing remain, and then ideally it gets merged with Alex's repo; we just need to figure out how to combine them and make a release. Anything else? Thanks so much. So, I'll talk about some of my work that I've done over the fellowship these four months. I had my hands in a few different cookie jars here, but what I started off with was EOF v1, and I was learning quite a bit about the implementation of EIP-5450, which is stack validation. The whole motivation behind EOF is that if we standardize data and headers across the execution layer, and how we handle contracts specifically within the EVM, we can actually reduce the number of checks we have to run at execution time, so the EVM can run a little leaner. There was a bit of a fallout between Akula and Reth, and when Akula got deprecated I wasn't able to translate my work directly into Reth, so I transitioned my project to ethers-rs. What I did with ethers-rs was combine event handling with write operations to the blockchain. The first thing that happens is you instantiate your contract and your provider, and then you have a trait; traits in Rust are shared behaviors across different types. Say you have the approve function from ERC-721: you create a structure that effectively mirrors it, there are some declarative macros you throw in, and then you can pass in your contract instance, your function selector, and the arguments, and it runs you through. The first thing I do when I get into the function body is ask, hey, what block are we at, so that I can later tell how many blocks have passed since I made the original call. In the middle, I make the call to the function on chain using JSON-RPC, and lastly I make a separate call for the events, to try to get the event from the transaction, and I wait six blocks to ensure that there are no short-term reorgs and that we have been successfully included in a block. The next step for this will be using wasm-bindgen to port it over to JavaScript, and Python as well; I've been talking to some of the maintainers about what would be viable for this. On the other end of things, I also wrote one of the first Fuel improvement proposals, in support of our bustling layer-2 ecosystem. What I have proposed: the big difference between the Fuel execution layer and the EVM is the UTXO set versus a nonce-based account model, so coins are added to the ledger in the UTXO set, not in contract state storage. What I want to do is enforce consistency in how we handle all sorts of assets on that ledger, so my suggestion was that we have a contract ID which we can use to derive asset IDs in the UTXO set. There was no way to select specific UTXOs, and I didn't want to have to use sat selectors, kind of like you do in Ordinals; that might have been a little sloppy. So I decided to take some inspiration from how Cardano approached including NFTs in the UTXO set, and given Fuel's unique architecture, what I ended up doing is applying a bitwise operation to the contract ID.
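Going back to that ethers-rs flow for a second, here's a minimal sketch of the pattern, assuming a hypothetical local node, a throwaway key, and the rand crate; the contract-specific call is left as comments, since it needs abigen!-generated bindings for a real ABI:

```rust
use ethers::prelude::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical local node; any JSON-RPC endpoint works here.
    let provider = Provider::<Http>::try_from("http://localhost:8545")?;
    let wallet = LocalWallet::new(&mut rand::thread_rng()); // throwaway key
    let client = Arc::new(SignerMiddleware::new(provider, wallet));

    // 1. Ask what block we're at, so we can tell how many blocks pass later.
    let start = client.get_block_number().await?;

    // 2. The write call itself needs abigen!-generated bindings, e.g.:
    //        let pending = contract.approve(spender, token_id).send().await?;
    // 3. ...then fetch the matching event (the Approval log) from the tx,
    // 4. ...and wait six confirmations to rule out short-term reorgs:
    //        let receipt = pending.confirmations(6).await?;

    let now = client.get_block_number().await?;
    println!("blocks elapsed: {}", now - start);
    Ok(())
}
```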
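And a toy sketch of that asset ID derivation idea; the XOR here is just an illustrative stand-in for whatever bitwise operation the proposal actually specifies:

```rust
type ContractId = [u8; 32];
type AssetId = [u8; 32];

// Mix the contract ID with a per-asset identifier to get a unique asset ID
// for the UTXO set; purely illustrative, not the FIP's normative derivation.
fn derive_asset_id(contract_id: ContractId, sub_id: [u8; 32]) -> AssetId {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = contract_id[i] ^ sub_id[i]; // bitwise mix, XOR here
    }
    out
}

fn main() {
    let contract_id = [0xab; 32];
    let sub_id = [0x01; 32];
    let asset_id = derive_asset_id(contract_id, sub_id);
    println!("{:x?}", &asset_id[..4]);
}
```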
We can then use that to derive a unique ID to interact with the individual UTXO being produced from that set, and so that is the specification. My main reasoning behind implementing this in the first place is that it's very nice to have standard behavior, so that both developers working on the application layer and developers working on the protocol layer have high levels of consistency, and behavior is not broken between similar types of actions you can take on chain. The last thing I've done around here was some research on Verkle trees and the implementation of vector commitments. My idea, the research I'm continuing with, is separating out the individual vector commitment from the different polynomial commitments. The motivation is that we need the vector commitment to be binding; if we separate out the polynomial commitments, we can actually batch them together, and if the type of polynomial commitment we use is hiding, we don't need to enforce that property at the level of the vector commitment. So our individual polynomial commitments are just commitments about data, and the vector commitment adds the additional layer of a position-wise commitment about that data. I think we can knock off some individual terms for the vector commitment if we're able to successfully batch together all these individual polynomial commitments. So in this application, you have two signers, one from my phone and also one from my other device, so that even if one of the devices is compromised, I can still use the other device to sign in the system. First the wallet has to be deployed, and then I can use either device to sign.
With this, you don't need much: you actually just need one more function, which is signUserOpHash, that's it; most of the other functions are inherited. This is where you actually sign. So here, for example, I'm signing with signUserOpHash, and if you see, I get a context from the contract. For those who don't know, in ERC-4337 we don't have transactions, we have something called user operations, because transactions always have to be signed with an ECDSA scheme, but a user operation can have a custom signature scheme. So here I have my owner one, which is the laptop's owner, and I also get the message, the sign message, from the context, and this is nothing but what I signed using my function. You'll be able to have your wallet easily; the idea, again, is to promote account abstraction at hackathons, so you can write your account abstraction wallet at a hackathon and have smart contracts ready and working. I am struggling with the fingerprint one; if someone wants to help me, please do, that's what I'm working on. I don't think I have enough time to showcase the fingerprint one, but I'll be happy to show it later. Thank you. Thank you very much. And if you have any questions, feel free to ask here or online. Yeah, I hope you can hear me, I'm sorry about the fan. You can hear me, right? This is the first time we're releasing it; hopefully people will use it, and the idea is to release it at the Denver hackathon, so Gaurav will be showcasing this at some hackathons. Thank you. Any other questions? Thank you. Thank you so much.