My name is Peter. I'm from the EELS team. EELS is an acronym for Ethereum Execution Layer Specifications. We're working on the future of how you specify the execution layer. We've just heard about how the consensus layer does its specifications in Python; if what I'm talking about sounds similar, it's because that's where we got the idea from.

Before I begin, I want to explain what I mean by the execution layer. Here we're only interested in the state transition function. You have a long chain of blocks, you have some state, and you add a new block. The questions are, firstly, is that block valid? Are you allowed to add it to the chain? And secondly, what is the new state afterwards? We don't care about anything else. We don't care about networking, and we don't care how you handle reorgs. We only care about the state transition function.

So currently, suppose you're someone who wants to know how the execution layer's state transition function works. Where would you go for information? The first place you could go is the yellow paper. I have an extract from the yellow paper here; it's a lovely passage explaining the header validity conditions. Some people think this is readable. They are in the minority. If you're curious about the weird mountains: they're not mountains, that's how mathematicians write 'and'. The other problem with the yellow paper is that it's really out of date. This is the Berlin version, and the reason it's the Berlin version is that no one has updated it for London yet.

Secondly, you can look at EIPs. EIPs are individual slices of change: specific change proposals. You can read them and see the history, but they don't tell you about relationships; they only describe individual changes in isolation. So you can read two EIPs, where one says this and the other says that.
But how do those two things interact? You just can't get that from the EIPs. At that point you can say, well, I'll look at the test suite. And the test suite is great: it's full of loads and loads of tests. But ultimately it's just a pile of tests. It's not particularly well organised, and if you want to find some behaviour, maybe you can find somewhere in the test suite where that behaviour is tested, but it's not specifying anything. And then you can give up and look at the Geth source. Fundamentally, if you've got to the point where you have to read client source code to work out how the execution layer works, we have failed at specifying it. These clients are big, complicated, high-performance pieces of code. They're not designed for the reader.

The other thing about specifications is that they need to be part of a standards process. You can't just say, well, we're going to implement all this stuff and then we're going to update the specification later. This is the big problem with the yellow paper: it's not part of the process. No one proposes a change by saying, 'this is how I would change the yellow paper', so the yellow paper just gets updated later as an afterthought.

The other problem is that the yellow paper is just a document. It's not executable; you can't test whether it works. And this applies to EIPs as well. Here is a sample from the EIP that changes the cost of the modexp precompile. It was proposed; it was accepted. And you think, this is nice: there's a piece of easily readable Python that says exactly what it does. The problem is this Python is wrong. No one has ever executed this Python, because if they had, they would have discovered that it doesn't really make sense. As Donald Knuth said: beware of bugs in the above code; I have only proved it correct, not tried it.

So this is our approach.
We want a code-first approach to specifications. We take Python, and we're not just writing any old Python: we're trying to write the Python you would find as an example algorithm if you opened a programming textbook. So we don't use classes; it's just functions and, effectively, structs. This is the common language; all programmers speak it. Anyone who has done any programming will immediately know what it is. You don't need to understand formal mathematics the way you do with the yellow paper. And, crucially, it can be executed. We can test it. If you want to know what the Python does, you can run it. If you want to compare it to a client, you can run the spec, run the client, and see whether they do the same thing.

We're purely interested in readability here. We don't care about performance, and it is extremely slow: it can't sync mainnet. I think it would take about six to nine months to start from genesis and get to the head of the chain. It's not a viable client.

We also make some very unusual design choices. We treat hard forks in isolation. If you look at the repository you'll see ethereum/frontier and ethereum/homestead, and each of those is an independent implementation of that hard fork. This is terrible for code duplication: literally every time we implement a new hard fork, there's a massive copying of thousands of lines of code. But it's great if you're a reader. You can see how Homestead works completely in isolation, without thinking about how it relates to anything else. Whereas if you read a real client, it's just a pile of 'if we're later than Spurious Dragon, do this thing; otherwise, do that thing'. If you do want to see a comparison, we're developing specialist diff tools.
And those diff tools allow you to compare exactly what happened: this is the specification for Frontier, this is the specification for Homestead, and this is exactly what changed, line by line, in code that we have tested.

So here's a sample. This is the SLOAD opcode; it will give you a feel for the style. A few things to note. First, you can immediately see that we've really tried to make it as readable as possible. We've also divided it into individual sections, which we've given headers. This actually matters: some of the semantics the EVM has derive from the fact that gas calculations are done before any computation is done, and you can get very confused about some of the subtleties, particularly of the call opcodes, if you don't realise this.

Now I want to move on: now that we have these things, how can they affect the development process? How does having these specifications make improving and building on the EVM better? I think it's helpful here to think about two sides of the development process. On one hand, you have your R&D people [unintelligible].
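The sectioned opcode style described for SLOAD might look something like this. This is a simplified sketch in the spirit of the talk, not the real EELS source: the names, the `Evm` structure, and the gas constant are all invented for illustration, and the real gas schedule is more involved.

```python
from dataclasses import dataclass, field

GAS_SLOAD = 800  # illustrative flat cost, not the real gas schedule


class OutOfGasError(Exception):
    pass


@dataclass
class Evm:
    gas_left: int
    stack: list = field(default_factory=list)
    storage: dict = field(default_factory=dict)


def charge_gas(evm: Evm, amount: int) -> None:
    if evm.gas_left < amount:
        raise OutOfGasError
    evm.gas_left -= amount


def sload(evm: Evm) -> None:
    # STACK
    key = evm.stack.pop()

    # GAS: charged before any computation is done, which is why running
    # out of gas can never leave a half-finished operation behind.
    charge_gas(evm, GAS_SLOAD)

    # OPERATION: read the value (zero if unset) and push it.
    evm.stack.append(evm.storage.get(key, 0))
```

The point of the section headers is exactly the subtlety mentioned above: the GAS section always runs before the OPERATION section.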
[unintelligible] Currently you have a situation where the implementers and the R&D people have to interact very tightly. If you have this EELS process, you can separate that out and do those things in parallel, so the implementers can be finishing up one hard fork while EELS is used to prepare the next hard fork. Whereas if you're busy modelling things in Geth, and the implementers are also tied up doing the R&D, you just don't get the same parallel efficiency.
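The 'textbook Python' style described earlier, applied to the state transition function from the start of the talk, might look something like this toy sketch. Every name here is invented for illustration and none of it is real EELS code; it just shows the shape: plain structs, free functions, and the two questions the spec answers (is the block valid, and what is the state afterwards).

```python
from dataclasses import dataclass


class InvalidBlock(Exception):
    """The block may not be appended to the chain."""


@dataclass
class State:
    balances: dict  # address -> balance, standing in for the world state


@dataclass
class Block:
    gas_limit: int
    gas_used: int
    transfers: list  # (sender, receiver, value) triples


def state_transition(state: State, block: Block) -> State:
    # Question 1: is the block valid at all?
    if block.gas_used > block.gas_limit:
        raise InvalidBlock("gas used exceeds gas limit")

    # Question 2: what is the new state afterwards?
    balances = dict(state.balances)
    for sender, receiver, value in block.transfers:
        if balances.get(sender, 0) < value:
            raise InvalidBlock("insufficient balance")
        balances[sender] -= value
        balances[receiver] = balances.get(receiver, 0) + value
    return State(balances=balances)
```

Because this is ordinary executable Python, you can run it against the same inputs as a client and check that both give the same answer.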
Now let me talk a bit about where we're at. We have implemented all the hard forks; the Merge is still a PR, but we're nearly there. We're going to need to do a bunch of refactoring, and then we need to freeze the code, because we're asking people to build on it. We're saying: if you want to take an EIP and build on top of it, change it, you take our latest fork, copy it, and implement your own changes. We can't ask people to do that until we freeze the code and say we're not going to refactor it under you, and we're not quite there yet. Once we've done that, we're planning to shadow the current governance process for Shanghai. Shanghai is going to go through in exactly the way all previous hard forks have gone through, but we're also going to work with a whole bunch of people to implement those changes as EELS proposals. Then we'll see how that governance process works, and hopefully we can improve on it and talk about moving away from some of these legacy approaches to specifying the execution layer.

Finally, there's the question of how you can help. First, you can't actually help yet: we're still coding, and we need to finish that. Once we've done that, we'll ask you to implement your favourite EIP and give us feedback. How was it? We don't want this to be a world where we, the EELS team, can develop things and propose changes, but no one else understands it. I don't want to be in the situation we're currently in with the EIPs.
Actually, only a really quite small minority of people know how to write TeX and how to change the spec. So I want this to be an open process where anyone who wants to propose something can just clone our repository, run the new-fork tool, implement their change, and produce a proposal, without being burdened by having to think about how the Geth DB works, or wherever else they would otherwise do their modelling. Yeah, that's the end of my talk. We have plenty of time, so do people have any questions?

State growth: if I make a commit to the execution spec and I want to see how that influences state growth over time, and the ability to run a node, and things like that, have those sorts of mutations been thought about from a metrics-gathering level?

I mean, this is the sort of thing people who implement this think about all the time. Yes, because we hold the whole state, you can model that sort of thing, though I think a lot of that is mostly about gas pricing and gas accounting. And there are advantages here. For example, there was a proposal about how withdrawals work; because I have the execution specs, I could look up the table of gas prices in the spec and say, this is how much this sort of thing should cost. But I don't think our tool is particularly useful for modelling state growth specifically; you just need to understand what actions can cause state growth and how much gas those actions cost.

How is this going to interact with the reference tests? There are a lot of crazy edge cases in the reference tests, such as integer boundaries and other strange things.

Oh yeah, I've had a lot of fun with crazy edge cases. I should do a talk on crazy edge cases at some point; they are amazing.
Firstly, we run all of the reference tests. If you go into our repository and type tox, the automatic test suite will run all of the reference tests, and currently we pass every single one of them; I have learnt an awful lot about the crazy edge cases of the EVM. We actually hope to go further than that: we want to be the canonical filler of the execution tests. For those who aren't familiar, we have test fillers, which aren't complete tests: they say, here is what the pre-state is like, and here are some assertions about what the post-state should be like. Then you take a client, usually Geth, and run a filling process that gives you a completed reference test, which tells you what the post-state is and basically exactly what the block should be after you've performed the execution. We think we should be the canonical filler of those tests: we can fill them before the execution clients have even implemented the changes, which means clients can use the tests to build their implementations. Whereas currently, until you've implemented something in a production client, you can't fill the reference tests at all. There's also, and I haven't thought about it in great detail, the question of whether we can analyse the structure of EELS to generate new reference tests: basically do some sort of path analysis, say these are the possible paths through the code base, and make sure there's a reference test that follows every possible path. But that's R&D we haven't done yet.

You mentioned that you care only about the state transition function. Does that mean you implemented your own EVM, so you can fill the tests and provide the post-state hash?
If you have a chain and you have a block, and your proposal is to add that block to the end of that chain and update the state, our specs tell you exactly whether that block is valid. If something in particular turns out to be invalid, you're just not allowed to add it; and if it is valid, the spec tells you what the state changes are, and then you can do the next thing. We don't do anything else; that's the only question we answer.

This is obviously inspired by the consensus specs, the pyspec. Are there any particular architecture changes or learnings you've taken from the pyspec?

Oh yeah, that's a really good question. There are a couple of different tiers here. The first is that the consensus pyspec is a bunch of documentation that can be compiled to code: if you look at the pyspec, it's a bunch of Markdown files with some code in them, and there's a compilation process that spits out code. We are a bunch of code that you can compile, using standard documentation tools, into documentation.
The second big change is that the pyspec does this incremental building, where you have phase 0 and then Altair, and the Altair documentation just adds a few changes on top of phase 0. What we did instead was copy the entire thing and create a new version that contains everything. Otherwise, if you wanted to know how the Merge, how Paris, works, you'd have to read Paris, which would say these are the changes versus London; then London would say these are the changes versus Berlin; then Berlin would say these are the changes versus Istanbul; and it would just keep going on and on. We don't think that's a particularly good way to do it, especially because a lot of the historical details are really quite legacy and not things people need to be concerned with nowadays, because they've since been changed. So our forks are completely isolated files. This causes a whole bunch of code-duplication issues and complexity, but it does mean it's really simple: you can read a particular fork completely in isolation from any other fork, whereas you can't do that with the consensus specs.

For the executable specs, how do you think they might change and improve the EIP process of Ethereum? There has been lots of conversation, in terms of governance, about whether this should perhaps be a requirement when you're proposing an EIP, to speed up the testing process. I'd love to hear your thoughts on how it could be integrated and which parts of the EIP flow it could improve.

So there are some people who think we should get rid of the EIP process for the execution layer. We should abolish EIPs for the execution layer, and instead you should have whatever EELS proposals end up being called, and it should just be:
If you want to change the execution layer, you write an EELS proposal. You might write a small amount of accompanying documentation, but mostly you just write a PR. There are other people who think we should have a hybrid process, where there is an EIP and the EIP comes with an EELS change as a mandatory component. I actually don't have a particularly strong opinion on this; I can see the argument for both sides, and there's a lot of discussion about it. I generally take the attitude that I'm going to wait until we've started using this, and people have seen it in production, and then they can start talking about what they actually want to do.
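To illustrate the test-filling idea from the Q&A: a filler plus a run of the spec produces a completed reference test. This is a rough sketch with invented names; real fillers have a much richer schema, and the real post-state hash is a Merkle-Patricia trie root rather than the plain hash used here.

```python
import hashlib
import json


def run_spec(pre_state: dict, tx: dict) -> dict:
    # Stand-in for executing the specification over the pre-state.
    post = dict(pre_state)
    post[tx["from"]] -= tx["value"]
    post[tx["to"]] = post.get(tx["to"], 0) + tx["value"]
    return post


def state_hash(state: dict) -> str:
    # Stand-in for the real state root (a Merkle-Patricia trie root).
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


def fill(filler: dict) -> dict:
    # The filler holds the pre-state and transaction; filling freezes the
    # computed post-state and its hash into the completed reference test.
    post = run_spec(filler["pre"], filler["tx"])
    return {**filler, "post": post, "post_state_hash": state_hash(post)}
```

The point made in the talk is about who plays the role of `run_spec` here: today it is a production client, usually Geth, whereas an executable spec could fill the tests before any client has implemented the change.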