This is a community discussion. We have people who are core devs, we have Cat Herders here, we have all kinds of people from different organizations who bring different skills to this. And what we're going to talk about is Ethereum 1x. This is an open format, a fishbowl format, which means people from the audience can join us. We have this chair right here and this chair right there that we're going to try to keep open. So if you come to the front and there's someone there, you can kick them out if you would like to join. But I expect this to fill in more as more and more people want to participate. If you want to keep participating, you can move into one of these chairs and just stay; maybe you won't get kicked out. But I think anyone could be kicked out, so sorry. But yeah, just keep in mind you're free to participate. We'll try to keep a mic over here if anyone wants to come, and we'll put a mic right here. And so I guess we'll get started. The key to this is we're going to start with what the community wants to talk about, what developers want to talk about, what core developers want to talk about. And then I can just pass the microphone on to you all, and maybe you can get started with the conversation. Sounds great. Should we do intros with some of the people up here who are core devs and such, just so people are familiar? Well, everyone in the circle gets an intro. Oh, perfect. Yes, if you're going to sit in the circle, then introduce yourself. OK, so let's kick it off. I'm Jamie Pitts. I work with the Ethereum Magicians, and I also work at the Ethereum Foundation. That's Annette back there; why don't you introduce yourself? Hi, guys. I'm Annette, and I'm working with the Ethereum Magicians, helping out with operations, organizing events, and making sure that all those people and all those important discussions come together. Cool. So we'll leave it to you.
Hudson, you want to introduce yourself? So my name is Alexey Akhunov. I've been working on the Turbo-Geth client, which is a derivative of Go Ethereum. Also this year I've been involved in research and development on state rent and stateless clients, all of which I'm still doing. Hey, I'm Hudson Jameson. I work with the Ethereum Foundation and with the Ethereum Cat Herders. I'm also one of the organizers of the bi-weekly core developer meetings, along with a few other people who all help out. I'm not a core dev, but I try to keep up with the tech so that I can facilitate the meetings accurately. I've been doing that for the past three years, roughly, so I know a lot about the process, EIPs, the hard fork decisions, things like that. That's how I feel I will contribute today. I also have a talk tomorrow on the main stage right after Vitalik, I think at 10, on ETH1x. So if you want to see a more formal presentation that I have slides for, feel free to come to that tomorrow. That's the end of my shilling. Hey, I'm Tim Beiko. I work with the PegaSys team on the Hyperledger Besu client. I'm a product manager, not a core dev, but I do attend the calls to try to wrap my head around the process and the EIPs, and I'm also part of the Cat Herders. I'm Rai, also at PegaSys, working on Hyperledger Besu. Hi, I'm Martin Swende. I work for the Ethereum Foundation on security issues and Ethereum mainnet infrastructure, and I'm also one of the coders on Go Ethereum. Hey, I'm Peter, and I'm currently the lead dev of Go Ethereum. That's about it. Who's this guy? I'm Vitalik. I advised a couple of ICO projects. You did Bitcoin Magazine before. Hello. Technically I'm not part of the core team, but since I'm sitting here: I'm doing the notes and the clients with the nation-state cryptography for the Russian central bank. Hey, guys, my name is Cody Born. I'm a developer on the Azure Blockchain team at Microsoft. Hi, I'm Lukas.
I'm just a random software developer who worked on dapps, and I'm a bit humbled sitting in this magic round here. OK, so what do you all want to talk about? If there's anybody who has a topic that relates to Ethereum 1x, please come forward with it. Do we have something? OK, come on down. I also have a topic. OK, how about you first? I'm worried about the scalability of Ethereum 1. We can't even reprice opcodes without breaking anything. I mean, we had this discussion with EIP-1283, I think it was, and now we have the same problem again with EIP-1884. So as a developer, I would be really interested in seeing some invariants that I can work with, so that the code I write and deploy on the Ethereum mainnet doesn't break in the future. In the case of EIP-1283, the developers weren't following best practices. But now, in this EIP-1884 case, people wrote code, they followed every guideline that was there, and now the code breaks. So I can give a quick intro to this before we have comments from some of the other developers. We had an EIP, and basically, in order to make the blockchain a bit safer, to prevent, and please correct me because I'm going to butcher this, DoS attacks, we repriced an opcode. And in repricing that opcode, if you have a smart contract that relies on that opcode being a certain price, that contract may break. There were a bunch of contracts that have upgradeability, I believe Aragon was one of them, and those are going to be able to be upgraded. But it's a little bit annoying. There are other contracts that may have the effect of having funds locked in them. That's a little bit scary of a term.
So what we did, in order to prevent this from being another scenario where it's like, we can't get our funds out, things like that, what we determined was: if people come to us after we implement the EIP and say something like, hey, this is messing my contract up, we're going to research and come up with a solution to work around it so that they can have their contracts work again, in those limited number of cases. Did I get that almost right? OK. Yeah, Martin can also comment a little bit more on that. Yeah, you brought up two cases, EIP-1283 and EIP-1884, and they're kind of different, because EIP-1283 was made to improve the life of dapp developers, making things cheaper, whereas EIP-1884 has been added basically as a security precaution to rebalance the opcodes. The former one, we skipped, so as not to break stuff; it's being implemented now again in the form of EIP-2200, but with minor modifications so that it won't start introducing new vulnerabilities into contracts. And in general, the idea that has been floating around a lot is that if we do changes to the EVM, we will also do versioning, so that these new features will be opt-in. However, it really makes no sense to have opt-in features for security features where we want to protect the base layer. So thus, we're rolling out EIP-1884. And the thing is, this will affect a lot of contracts, but in the large majority of cases, it will only affect them a little bit. Most use cases that are affected can be solved by adding a bit more gas at the beginning, and I think that would cover like 95% of all the cases that are going to be affected. But then there are some particular flows where it's already encoded that there's an automated transfer into something else, and that flow between two contracts can thus be broken in a way that is not fixable, I mean, not fixable by the initiator of the transfer or the operation. And so that much has been acknowledged: we know that this might break things.
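To make the "automated transfer" breakage concrete, here is a hedged back-of-the-envelope sketch in Python. The numbers are the commonly cited ones (SLOAD going from 200 to 800 gas under EIP-1884, the fixed 2300-gas stipend forwarded by Solidity's transfer()/send(), LOG1 at 375 + 375 + 8 gas per data byte); the two-SLOAD fallback is a hypothetical example, not a specific deployed contract.

```python
# A recipient's fallback function that reads two storage slots and emits
# one event, paid for out of the fixed 2300-gas stipend that Solidity's
# transfer()/send() forwards. Repricing SLOAD (EIP-1884: 200 -> 800 gas)
# pushes the same code over the stipend, so the transfer starts reverting.
STIPEND = 2300                 # fixed gas forwarded by transfer()/send()
LOG1 = 375 + 375 + 8 * 32      # base + one topic + 32 bytes of data = 1006

def fallback_cost(sload_price, n_sloads=2):
    # Total gas used by the hypothetical fallback function.
    return n_sloads * sload_price + LOG1

print(fallback_cost(200))   # 1406 -> fit inside the stipend before EIP-1884
print(fallback_cost(800))   # 2606 -> exceeds 2300 after, the transfer reverts
```

Nothing about the recipient's code changed; only the price of an opcode it uses did, which is exactly why "add more gas" cannot help a hard-coded 2300-gas stipend.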
The question is how hard will it break and how much pain will it bring? Because I think that in many cases, contracts are actually upgradable, and if you have a set of contracts which suffer from this, you can actually redeploy it and fix it that way, which further brings down the number of actually broken flows. And if it can be evidenced later on that, hey, we actually have these particular instances which are broken, cannot be upgraded, and cannot be salvaged in any way, then we can look at what fixes are needed to solve those instances. And there are a couple of ways that may help in most of them. For example, if we lower the cost of a LOG operation, that will solve a lot of the issues. But it remains to be seen exactly what kind of patches will be needed to get the maximum coverage of the rescuing, or whatever you want to call it. So maybe just a step back: I thought it might be useful to offer an introduction to the specific gas cost increase and some of the security background around it. So in Ethereum, we have different kinds of opcodes. We have opcodes that do computation: add, divide, subtract, elliptic curve pairings and so forth. We have data, and we have opcodes that do disk I/O, so basically opcodes that read contract balances, read contract state, read things that require accessing the disk. And it's been a general fact of Ethereum pretty much since the beginning that the gas costs that we set for those opcodes are, for multiple reasons, far lower than they should be. One of those reasons is that, relative to their gas costs, those opcodes actually take a fairly long time to process, because accessing disk is pretty expensive and takes a long time. For example, there was a recent paper that suggested that on their own hardware, a worst-case DoS block would take up to 80 seconds to process.
And making a worst-case DoS block is hard, because you have to outbid literally all the other users, but the security is still lower than we would want it to be, especially given people's desire for more scalability in the medium term. And so from a security point of view, it's pretty much required to increase the gas costs of checking contract balances, reading contract code, reading contract storage. My own personal opinion is that the EIPs that we have don't go nearly far enough. But the problem is that this requires increasing some gas costs, and this basically breaks some contracts that relied on the assumption that there is a fixed amount of gas within which they could do some things. So I think I've written this opinion somewhere, but from my point of view, the reason why increases in gas costs break things is because gas is used for two different functions. The first function of gas is what Vitalik was talking about: to measure the impact on the system, the performance impact, and charge people for the transaction to compensate for that and discourage abuse of the system. But there's a second function which gas also plays, which is the restriction of things like recursion and callbacks. And that second function has been on the rise since 2016, especially after the re-entrancy problems were brought to light; people are trying to use the second function of gas much more, so they allocate a very limited amount of gas for a lot of operations to make sure that recursion doesn't go deep or that callbacks can't do things. But we now see that these two functions are coming to be at odds with each other: while improving one, we actually hurt the other.
And from my point of view, it is becoming worrying, because that tells me that in the future, where we need to do more of those things, adjusting the gas costs, and we have to assume that we should be doing it all the time anyway, because the hardware changes, everything changes, the way out of it is to split those two functions and decouple them from each other. That does bring complexity, because we will essentially have two gases now, two gas counters, one for the first function, one for the second function, and then you need to decide: what is the gas limit, then? Is it the gas limit for the first one or for the second one? But I think it's solvable, and more importantly, if you decouple this, then you're a bit more future-proof. So coming back to your original question, which was about invariants as an app developer: obviously we need to change the gas costs sometimes, because of security and a whole bunch of reasons, some of which we don't know today. So I guess it's a question for people up here. Are there actual invariants that you can give to app developers building on Ethereum? Or is the hard answer that things will change, they will break, and we'll try our best to make that as little inconvenient as possible, but it's kind of unrealistic, if you're building an app, to assume there are any invariants? I think the truth is somewhere in between, but I'm curious what people think about that. Yeah, I think that app developers should have the mindset that things may change, that there are no invariants, and that they should build their systems or deployments in ways that they can be upgraded, because it's still developing. The EVM is still developing. That's my point of view. Just to add to that.
I think, for example, another important thing that app developers should always keep in front of them is that every time you find a neat little trick, hey, I can store this data in a way that is a bit cheaper than storing it the other way around, our goal as maintainers of Ethereum is to make sure that everything is priced correctly, which means that storing data should cost what the actual resource costs. So if you figure out a cheaper way to store it, there is a high chance that, if it picks up steam, it will get patched, simply because it's not balanced correctly. Our goal is always to balance the gas costs to the resources actually being used, and yes, sometimes that means making them cheaper, and sometimes it means making them more expensive, but all the loopholes will eventually be filled. So I think that's a good invariant. Can I ask a question, unrelated? Can we move on? Yeah, we'll move to another topic, go for it. Great. Hi, I'm Mariano from Maker, and I'd like to discuss Programmatic Proof-of-Work, ProgPoW. Because I feel like there's a big divide between... Is it crying in the background? ...core devs and the community in general, and I've seen a lot of prominent members of different projects and apps who feel strongly against it, and many core devs who feel strongly for it, and the other way around. And I don't think we've ever been closer to a potential contentious hard fork since 2016, so I would like to see what you think about it. I think ProgPoW is really important as an issue that we need to decide. Not important in itself, necessarily, and I am not taking a position, but I'll say that the politics around it make it really challenging, because on one side you have people like ASIC manufacturers and users and some investors who say we don't need it, the threat isn't big enough, or the people who made it are unknown and that's scary, and things like that.
And then on the other side, you have the GPU miners and the mining pools saying we need this, otherwise we're gonna go under, and you're going against the promise that you made in the whitepaper, according to them, that there would be ASIC resistance in Ethereum. And then there are many, many other arguments. I'm actually, post-DevCon, going to make a large blog post, Reddit post, et cetera, with everybody's arguments and counter-arguments, so the community can get a better idea of where everybody stands on that from each side. That's on my to-do list, and it's nearing the front of it, so that'll happen hopefully before November. And other than that, I do wanna hear the perspective of some of the core devs, because you're right, there are core devs who are supporting ProgPoW from their perspective, whether it's technical, political or both. And another divide that's very naturally occurring is you have the technical perspective, where we had two audits done, a hardware and a software audit, that basically cleared ProgPoW as something that wasn't super fishy. And then you have the technical perspective of: it's good, we've implemented it, it'll take two to four weeks for the other clients to implement it, we're good to go. And then you have the political one, where it's like, I hate Kristy, Kristy's bad, Kristy lied, all this other stuff with Core Scientific and the people who were anonymously developing it. So long story short, a lot of politics involved, there are technical versus non-technical arguments, and there is a division that needs to be figured out in a rough-consensus way.
Yeah, I mean, I sort of had my opinions, I put them out there and I'm happy that people were listening and there was discussion going on. And at this moment I'm, I'll be okay if it gets implemented, rolled out, and I'm gonna be okay if it doesn't. So I'm not really going to worry about this. So, if I'm basically, something that I said before, if I turned out to be wrong, that's fine. You know, I just kind of moved on. I think it's made a good progress recently. Yeah, so I'm one of the core developers who are pro, progpow, and I've written down my reasons why. I believe progpow is the right way forward. The way I see it, I think there's a very loud, what I perceive as a very loud minority who is extremely loud and spreading a lot of fear and uncertain in doubt about it. And in my view, there has been signal and taken from the community. There's been coin votes, there's been mining votes and et cetera, et cetera. So I feel it has been kind of, there has been shown to have a great support in the community. And I don't think the situation has changed from six months ago, except that at this point in time, there has been a lot more, yeah, a lot more churn from the political side and the fun side. Yeah, and I, from the political side, I would say that one area that myself, that I fault myself on and others is there wasn't enough communication. People like to be heard, and this kind of got sprung on people, some people, because who's gonna read a transcript of the most boring core dev meetings every other week? I mean, it's not boring to everybody, but there's a lot of very deep technical topics and the average person isn't gonna realize that. So progpow got sprung on a lot of people and that was a communication error. 
And so because of that, now people are upset because they feel like they're not being heard now, and they weren't being heard then, when they didn't like ProgPoW and thought it was dead because it wasn't getting implemented quickly, because this has been on the radar since, no, before last DevCon. Since March of 2018, ProgPoW has been being implemented, talked about, et cetera. So it's been a long time. So I think if we can have people be heard, that's going to heal a lot of the wounds and start to get us more toward a rough consensus. And I welcome people from the audience: if you have an opinion on this, you can go sit in this chair and give your opinion or whatnot. And otherwise, anyone else, we can go to another topic. One thing that I'd like to point out is... Intro. My name is Dan O'Farran; I'm picking my son up from school. One of the things that I've noticed within the past six to eight months with this is there's been a notable drop in civility in the arguments relating to ProgPoW. And the problem with this drop in civility is people start closing their ears. And there are principled people with real, solid objections to it who get lumped together with some of these people spouting crazy things. And that's doing a disservice to the dialogue: to be uncivil and to be rude and crude to everyone, and to send all sorts of weird, veiled threats to people. That's just not cool. I agree. Yeah, I want to echo Dan's statement. I'm Adrian Sutton; I work on Besu. I think one of the things that's very dangerous in this for the whole Ethereum community is that if we set a precedent where you can shoot down something by personally attacking someone proposing it, then we're gonna see a lot more of that in the future. So I'm really keen to hear technical concerns. And that includes some of the politics of: do we actually want to be ASIC-friendly? Is that the direction we want to go?
Those arguments I'm all for; I really want to hear them. But then we start diving into "this person was associated with that" or "they're corrupt," without actually being able to point to what they're supposedly trying to do. So if you've got a problem with ProgPoW and you think it's all a great scam, then great, let's hear exactly what the scam is. Not just a vague thing where we're slandering personalities, because that leads to more and more attacking of people, which makes it incredibly difficult for people to stand up, make proposals and be involved in the community. We've lost people in the community who were doing great work because they've been attacked personally in the past. We don't want to see that happening in the future. Just to add my two cents to this whole ProgPoW versus Ethash discussion. My current feeling is that the whole discussion boils down to whether we want to be ASIC-resistant or not. And I think this is something that is just tearing the whole community apart. And from my personal perspective, I don't really see the whole point of it. Currently we have a few big mining pools. Now, if we have an ASIC-friendly mining algorithm, then probably we will have some manufacturers that make a killing out of it, and some mining pools. If we have an ASIC-resistant mining algorithm, probably there will be some different mining pools and different hardware vendors making a killing out of it. But essentially, from my perspective, we're just trying to decide who we want to give our money to for mining or for creating that hardware. If, on the other hand, we can actually technically say that one of them is dangerous, or we can somehow prove that, yes, Ethash is a security issue, and we can prove that publicly, then all of a sudden probably the entire community would shift toward the other one. So if there's a threat to the network, I think the whole discussion is decided.
And if there's no threat to the network, then it's just a heated political thing about who gets to make more money out of it. And that's not going too far too fast. All right, do we have a next topic, or does somebody want to discuss something? Yeah, come on up. You can come on up before we're done talking about the current topic, by the way, and just grab a mic so we know the next one's ready. All right, hey, I'm Hernando. What's the deal with state rent? Where are we at with that? It's canceled. No, actually, the answer is more complicated than that. So our current effort, my main effort with my little team, is to prototype and specify the first viable version of the stateless client. What's this? Oh, yeah. Probably the best thing to put out there is the post on the stateless client prototype. If that's the one being proposed, yeah, that one. That's probably the closest one; if you scroll down there, it describes the idea of it slightly, go down a little bit. It's a very old post, but it's still relevant, and it describes what we want to do. And, yeah, further down to the graph, the next one, the breakdown, yeah, further, further, okay, that one. So I've done some more analysis on this, but basically, what we're trying to do is to kind of circumvent rent a bit, because, as I said in April at our meeting in Berlin, after researching rent for a few months, I've realized that it is possible to do it technically, but it would be a very expensive project to roll out practically.
And the reason why it would be very expensive to roll out right now is the cost of the ecosystem research which would have to be done, because state rent, wherever you implement it, will change the programming paradigm, at least in the way I thought about it, unlike other proposals where you have a kind of attenuated cost of storage, which I don't call state rent, because it doesn't actually remove things from storage. Any proposal, in any shape or form, which starts removing things from the state after the rent is not paid, will inevitably change the programming paradigm, and some things will become unexpected. I'll give you an example. We have a smart contract in Solidity, and it has a field variable in it, and you initialize it in the constructor, and that variable is essentially, let's say, a pointer to another contract. You initialize it in the constructor, it's a non-null reference to another contract, and you can happily use it within the contract at any time. And you assume that it will never disappear, so you don't have to check, in every function of the contract, that the pointer is still valid, that the contract you point to is still there, because you assume that, okay, I initialized it in the constructor, it will not go away. It's like a const variable in C++, right? Once you initialize it, it never changes. But then, with state rent, you might see that this thing can just go away under the pointer. The contract will disappear, and then your pointer will point to something that doesn't exist anymore. So lots of these things would have to be changed in terms of the paradigm. And obviously there will be some contracts that would be susceptible to some griefing attacks, which would need to be researched as well.
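To illustrate the "pointer goes away under you" scenario just described, here is a toy Python model. It is purely a sketch of the paradigm shift, not any proposed implementation, and all the names in it (the `0xToken` account, the `rent_paid_until` field, `collect_rent`) are hypothetical.

```python
# Toy model: a "const" reference set once in a constructor can start
# dangling under state rent, because eviction removes the target account.
state = {"0xToken": {"balance": 100, "rent_paid_until": 5}}

class Consumer:
    def __init__(self, token_addr):
        # Initialized once, assumed valid forever -- the pre-rent paradigm.
        self.token = token_addr

    def balance_of_target(self):
        target = state.get(self.token)
        if target is None:
            raise RuntimeError("dangling pointer: target evicted for unpaid rent")
        return target["balance"]

def collect_rent(block_number):
    # Evict every account whose rent has lapsed.
    for addr in [a for a, acct in state.items()
                 if acct["rent_paid_until"] < block_number]:
        del state[addr]

c = Consumer("0xToken")
print(c.balance_of_target())   # 100: fine while the target account exists
collect_rent(block_number=10)  # rent lapsed at block 5 -> account evicted
# c.balance_of_target() would now raise: the constructor's invariant is gone,
# so every function would have to re-check a pointer it used to trust.
```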
And obviously, the amount of resources that I thought would be required for that, if we really wanted to roll it out, would be something that I cannot afford, and I'm not sure if the Foundation would be able to afford it either. So that's why we've put more emphasis on stateless clients, which might actually, if implemented correctly, and if my intuition is correct, give us enough runway. And then I've got another proposal, which I haven't published yet, that actually introduces state rent after the stateless client, but in a slightly different and more surprising way, but I don't think I have time to explain it yet. Can you briefly answer the question: is there a way to get it back? So basically, it's postponed at the moment until we do the stateless clients. Okay, so stateless clients are the way to go until Eth 2.0? That's what's gonna hold us over? Hopefully, because obviously the future is hard to predict. This is our current plan, I mean, that's what the title is trying to say. So another reason why it's good to go this way is because by far the easiest way to merge Eth1 into Ethereum 2.0 is to basically turn it into a stateless client by default, because that's just the way that state inside of Eth2 works. So it actually makes the roadmaps fit together very smoothly. The downside of the stateless client relative to state rent is that there will have to be nodes in the network that still hold the entire state; this requirement just won't apply to every full node. So somebody will still have to hold the entire state. But we hope that, as long as not everybody needs to do that, it will be better.
So one hard thing about stateless clients that I think it's important to be very transparent and clear about is that making stateless clients work well will ultimately require basically the same kinds of gas cost sacrifices that we talked about in the first question, but probably to an even greater scale, right? So for example, the witness size, the size of the extra Merkle data that you would need to pass to verify a worst-case block, right now is about 330 megabytes. And that's basically because you have contract calls at 700 gas each, up to 24 kilobytes of code per contract that has to go into the witness; multiply roughly 14,000 calls per block by 24 kilobytes, and you've got your hundreds of megabytes right there. And then there are smaller versions of this attack with things like the BALANCE opcode, SLOAD, and all of those things. So I think we will ultimately need changes that say: one, the gas costs for these storage-accessing opcodes go up more; two, the gas cost for accessing contracts that have a lot of code needs to go up even further, right? So for example, charge one gas per byte of code that you read. And if we want to make this nicer, one thing we could do, for example, is waive the extra fee if the contract was accessed, say, within the last 10,000 blocks. And that would reflect the actual load on kind of, quote, "almost stateless" clients that keep state around for a little bit of time. So that kind of rebalancing would need to be thought about as well. So basically, for multiple reasons, developers should expect that I/O, storage reading, account reading, cross-contract calling, is likely to get more expensive relative to other things. Just to pause this topic for a second, I wanted to introduce someone who just got here, Felix. Felix, take the mic and just give a quick introduction. He works on some of the networking stack. Yeah, I'm Felix.
I work on Geth, on the networking side. I just got here from another session, and I missed the beginning, so I don't really know what you guys talked about. But here it's mostly been about stateless clients, which add huge challenges on the networking side as well. So I feel like stateless clients are basically just as unsolved as any other problem. There has been progress, but it's not really something where we can honestly say it's easy to pull off. Yeah, I would use the same analogy as for quantum computers. I heard about two years ago that, in terms of quantum computers, all the fundamental challenges have been solved; now it's a matter of engineering to build one. I mean, there is no real super-complex stuff in stateless clients; the idea is super simple. Yes, there will be more shuffling of data around the network; there will be some issues with that, but I don't see them as being fundamental. Can I interrupt? So I think there are really fundamental issues there. The problem is that currently transactions are really tiny. The moment you start entering into territories of megabytes, and we're not talking about hundreds of megabytes, just megabytes, the whole networking layer needs to be gutted out and retrofitted so you can chunk up messages into smaller pieces. That's a lot of engineering effort, maybe not research effort. Yeah, let me just finish. However, even if that is solved and the network data shuffling is solved, then all of a sudden you also need to store these witnesses beside the transactions, so you explode the data usage of the immutable side of the chain. And we can say that, yeah, that's cheap to store, we can put it on an HDD, sure, but we're already at 100 or 150 gigabytes of tiny transactions. Now imagine that every transaction would be, not twice, ten times the size, and maybe instead of 150 gigs we would have two terabytes of immutable chain.
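The witness-size arithmetic mentioned above can be checked with a quick, hedged back-of-the-envelope script. The numbers (CALL at 700 gas, a 24,576-byte maximum code size as in EIP-170, a 10-million-gas block) are illustrative assumptions, not exact protocol constants for any particular fork, which is why the result lands near, not exactly on, the ~330 MB figure from the discussion.

```python
# Back-of-the-envelope witness size for a worst-case block made of calls
# to distinct max-size contracts, each of whose full code must be carried
# in the witness.
CALL_GAS = 700            # assumed cost of the CALL opcode
MAX_CODE_BYTES = 24_576   # max contract code size (24 kB, per EIP-170)
BLOCK_GAS = 10_000_000    # illustrative block gas limit

calls = BLOCK_GAS // CALL_GAS              # ~14,000 calls fit in one block
witness_bytes = calls * MAX_CODE_BYTES     # every call drags in full code
print(f"{calls} calls, ~{witness_bytes // 10**6} MB of witness")
```

The point of the exercise is that a single cheap opcode multiplied across a whole block drags in hundreds of megabytes of proof data, which is the motivation for repricing code and storage access.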
An archive node, all right, basically has the information you need to give a stateless client full syncing data. Yes, he said that. I was just saying, an archive node basically is the node type that has all of the data, with the ability to reproduce it on demand for any transaction. Yeah, but essentially, without an archive node you would not — so do you need the archive node, or do you need the chain of witnesses? So one thing we can agree on with stateless clients is that it basically changes the user experience a lot: where previously handling the state was kind of Ethereum's problem, with stateless clients it's a user problem? No, no, no, that's not true. In a way? Because I think you might be confusing it with what I call stateless contracts. So the stateless contract is actually a design pattern whereby, instead of storing your data inside the contract storage, you store some sort of Merkle root, and then you pretend that the data is somewhere off-chain — well, you don't pretend, it really is off-chain — but whenever you do something with the data, you have to provide the Merkle proofs, and these proofs go inside the transaction, and that increases the size of the transaction and so on. The stateless client is actually not like that: it moves the burden into the protocol layer. The job of the smart contract developer will not change, and the user will have exactly the same experience, apart from paying more for state access. So as Vitalik mentioned, the CALL opcodes will be more expensive because you have to load the code, the SLOADs and SSTOREs will be more expensive, balance reading will be more expensive — anything that touches the state will be more expensive, to pay for the witnesses. The transaction format will not change.
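The "stateless contract" design pattern described above — the contract keeps only a Merkle root on-chain, and every transaction ships the proofs for the data it touches — can be sketched roughly like this. This is a toy binary Merkle tree for illustration, not any production implementation:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root of a binary Merkle tree over the leaf hashes."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:                 # duplicate last node on odd layers
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes linking leaf `index` up to the root."""
    layer = [h(leaf) for leaf in leaves]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        proof.append(layer[index ^ 1])     # sibling at this level
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    """What the on-chain contract would run: check the supplied proof."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

data = [b"alice:10", b"bob:7", b"carol:3"]   # lives off-chain with users
root = merkle_root(data)                     # the only thing stored on-chain
proof = merkle_proof(data, 1)                # shipped inside the transaction
assert verify(root, b"bob:7", 1, proof)      # contract accepts valid data
assert not verify(root, b"bob:99", 1, proof) # tampered data is rejected
```

The point of the contrast in the discussion: here the proofs inflate each transaction and the burden falls on the dApp developer, whereas a stateless *client* handles the equivalent witnesses generically at the protocol layer.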
Essentially, there will be more data shuffling around the network, but not as transactions — basically as witnesses. And of course, we will chop the witnesses up into parts to ensure better propagation. So that is also a pretty solvable problem, but yeah, it's not... Okay, well, so in that case that's like a halfway solution, basically. Like, you do still want to have state — it's not stateless as in Ethereum itself doesn't have any state, it's more about... So I would say the stateless client is basically an approach where, instead of asking all the developers to follow a certain design pattern, you just basically solve it for them in a generic way. So if you have a stateless client, instead of telling everybody to put a Merkle root in the contract, you just tell them: do whatever you did before, and we're just gonna take care of that. You're just gonna pay for it a bit more, but all the technicalities will be taken care of. So we have the working group session scheduled for later this afternoon, and about 15 minutes left for this session. So maybe it makes sense to just have a stateless client discussion this afternoon, because I know there's a whole bunch of new people sitting there that probably have questions. 3:30 to the end of the day — I don't know what that is. They have this room here to just discuss working groups. Jason Carver also joined. Give a quick intro with Felix's mic. Hi, Jason Carver, working on the Trinity client. It's an all-Python client. I came here to talk about stateless clients, so I guess I'll save that for a minute. I mean, we can talk about stateless clients, but yeah. Yeah, does anyone have anything else? There's two, three chairs. Oh, yeah. The mics don't reach that far. Hello, hello. Could you talk about EIP-1559, which is the idea of having the gas price set at the protocol level and burned instead of given to miners? Yeah, sure.
So the basic description of the proposal is that we basically add a kind of negative component to the block reward. It basically says that we have some in-protocol fee, and within the context of any particular block you can think of it as a constant, but the constant adjusts up and down over time. And basically, the miner of a block has to pay some amount — or alternatively, the block reward is reduced by that amount — which is equal to that fee multiplied by the amount of gas used. And obviously, whatever this burned fee is, it'll push up the minimum gas price that the miner is willing to accept by roughly the same amount, right? So if the fee goes up to 10 gwei, then the miner is not gonna accept transactions that pay less than 10 gwei, because they're not even gonna be profitable anymore. And the idea would be that we would, say, increase the gas limit of a block from 10 million to maybe 20 million or more, but then we would have this fee be adjusted according to an automatic algorithm that basically targets the average gas usage of a block to be at the 10 million level. So sometimes blocks will be bigger, sometimes blocks will be smaller. Whenever blocks are bigger, the fee goes up; whenever blocks are smaller, the fee goes down. And so at equilibrium, there's some kind of mandatory fee that gets burned and miners aren't willing to accept anything below it, but blocks are on average gonna be, like, half full or one-third full instead of being completely full. The reason why we want to do this — and there are actually a few different reasons — the most fundamental one is that currently, fee markets work really horribly. So one example of this is that on average, people tend to overpay by a factor of somewhere between three and five, or higher than five, for transactions.
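The adjustment mechanism described above can be sketched as a toy simulation. The specific update rule here (the fee moves by at most 1/8 per block, proportionally to how far gas used is from the 10 million target) is an illustrative assumption — the actual EIP-1559 parameters were still being debated at the time of this discussion:

```python
TARGET_GAS = 10_000_000   # desired average usage per block
MAX_GAS = 20_000_000      # hard per-block cap (2x the target)
ADJUSTMENT = 8            # max fee change per block: 1/8

def next_base_fee(base_fee: float, gas_used: int) -> float:
    """Burned fee rises when blocks run above target, falls when below."""
    delta = (gas_used - TARGET_GAS) / TARGET_GAS   # in [-1, 1]
    return max(base_fee * (1 + delta / ADJUSTMENT), 0.001)

# Sustained full blocks push the fee up multiplicatively...
fee = 10.0   # gwei
for _ in range(10):
    fee = next_base_fee(fee, MAX_GAS)
print(f"after 10 full blocks:  {fee:.1f} gwei")   # grows like (9/8)**10

# ...and sustained empty blocks bring it back down.
for _ in range(10):
    fee = next_base_fee(fee, 0)
print(f"after 10 empty blocks: {fee:.1f} gwei")   # shrinks like (7/8)**10
```

The equilibrium behavior is what matters: under steady demand the fee settles where blocks average the target size, so short spikes get absorbed by the slack up to the cap instead of by a bidding war.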
And you see people talking about a trade-off: you either pay five times more and you get included right now, or you pay the minimum and you wait — maybe a minute, maybe 10 minutes. But in reality, that waiting is completely socially unproductive, because it's the same load on the chain whether it happens now or 10 minutes from now. And also, it adds a lot of wallet complexity, because people have to think about: oh, is this gonna be a slow transaction? Is this gonna be a fast transaction? So EIP-1559 fixes it by making the calculation for a miner much simpler. The calculation basically is: if there is a transaction that pays whatever the burn fee is, plus whatever amount overcomes the extra risk that my block will be an uncle — and that value has been estimated at about one gwei — then I'll accept it. And so clients would just calculate the fee that they pay based on a simple formula, and it'll reliably get into the chain. And so users will be able to get pretty reliable next-block inclusion of their transactions, even if there are short-term spikes — like, there's suddenly two times more usage this minute and two times less usage the next minute. So it's intended as a fee market simplification and a user experience improvement, because users just won't have to wait anymore. What does that do for users who are willing to let a transaction happen sometime over the next day? They're very price sensitive, say. Yeah, so users will still be able to basically say: I wanna send a transaction, and I'm gonna pay a gas price which is lower than the current one. Let's say, for example, I as a transaction sender see that the current fee that's being burned is pushed all the way up to 23 gwei because, I don't know, FairWin is doing stupid stuff. Then I as a user know that FairWin doing stupid stuff is exceptional, and four hours from now, chances are FairWin will stop doing stupid stuff.
So I as a user totally can send a transaction with a gas price of, say, nine gwei instead of 23, and whenever FairWin stops doing stupid stuff and the burned gas price comes back down to below nine, the transaction will get included. So that's totally fine and intended behavior. So I want to comment on this: a lot of times we're thinking about the one user sending one transaction, which is okay — you know, you can wait, or you pay a bit more. But the real problem is for the people or organizations who actually depend on sending tons of transactions all the time, and for those people it's crucial that things work well, because they can't just say, well, we're gonna close our exchange for four days until this whole thing blows over, right? Or... Right. We still have some time? Other topics? 10 minutes, any topics? Yeah. I want to bring up this idea that, in order to reduce storage requirements on nodes, we actually make more progress on implementing a mechanism where we only require nodes to store, say, the last year of history, and for anything older than one year, we figure out some way where each individual node only needs to store part of it — so that instead of node storage being 200 gigabytes, it can just be, like, state plus 10 to 20. Yeah. When you say splitting it, you don't mean like... I mean like... You should go to Peter. Sounds like sharding, kinda. Is it sharding of chain data, or something different? Yeah, so I guess this was one of my proposals from way back — I think last year January, or this year January. So the idea was that currently, a lot of Ethereum's chain data — about 150 gigs, give or take — is just the immutable past chain. For example, past transactions, past blocks, past receipts. And many people don't really need this. I mean, yes, it's nice that you can dig up a transaction that you did five years ago, but do you realistically want to do it?
Now, okay — it might be an interesting thing for you to want to dig up your transaction from five years ago, but I definitely don't want to dig up your transaction from five years ago. So essentially, we're wasting a lot of space for a lot of people, and it doesn't really give anything back. And then the idea was: can we somehow say, okay, we're going to maintain some last time frame, maybe a year, and keep everything from that window on all full nodes, but everything that's older than one year, let's try to stash it somewhere else and make it available. And generally, I think most of this is doable. It is not even a hard problem to solve. The tricky parts start to appear with, for example, logs. Smart contracts use logs as a cheap data storage, and then the dApp filters for the logs. And the issue is that logs weren't really designed for this — but it was never explicitly said or enforced that this is not the intended use case. The intended use case of logs was to raise events. So your smart contract does something, it raises a few events, and then you react to those events. And yes, it's nice that you don't have to react immediately — you can react a day later, a week later, a month later. And that's perfectly fine. The problem is that there are dApps, like the Akasha social network, which use logs to store the posts themselves. That means that when you want to list the posts that you've written, or your feed, they will actually scan the entire blockchain for data. And these are the use cases that get really, really broken, because all of a sudden you have a node that wants to access, all the time, the history going back the past five years. And this is really the challenge that needs to be solved.
On the upside, however, the nice thing about that chain data is that it's immutable — well, in theory it's not immutable, but in practice, due to the proof of work, you won't have a reorg that's a million blocks deep. And this means that you can move this data onto an HDD, so you can actually store it relatively cheaply. However, I still say that if we want to enter this stateless client territory, then if the blocks all of a sudden become a hundred times larger, we need to solve this, because even on an HDD you don't want to put a hundred times more data. So it's an interesting problem, definitely. So, something connecting to the next section of topics: how does the finality gadget figure in with the chain data? Do we need to keep historical data beyond the horizon of the agreed-upon finality? And how important is that? For a finality gadget, you mean like the ETH2 finality gadget for ETH1? And for consensus purposes, you definitely don't need to keep any history older than about eight months. But one year was the period that I had suggested we make kind of mandatory to hold. Sorry — so the one proposal that I have for solving this receipt issue is that it's actually a very nice problem to solve with a crypto-economic light client market, right? Like, I want a complete list of all the transactions that satisfy some particular bloom filter. I ask you, you give me a complete list, and you sign that list. And if someone else disagrees, they can make a Merkle proof and they can slash your deposit. So, basically, making something like that. And the other benefit of that approach is that instead of requiring you to scan a bunch of headers locally, it's basically a few rounds of network messaging. So it could also be interesting to look into prototyping in the medium term.
So maybe one thing we could do, because there are so many people in the room, is just ask: a lot of you guys are dApp developers — who here needs access to logs from beyond, like, the last three months or something? Whose dApps actually use this stuff? You can just raise your hand if you need access to all the historical stuff. I see about 15 hands. Yeah. So what about transactions? Whose dApps use transactions? Could we also get some information on why those logs are needed? Well, maybe later. Because it's cheaper. Yeah. But that would actually be a really interesting question for a lot of client developers: which dApps are heavy on actually accessing transactions — like transaction-by-hash, transaction-by-block-and-index. Who uses those? Is there any developer here who wants to talk about it? Please come over inside this circle, because those mics have a cable, so we'll just bring them. So just for the record, I saw two hands for the transactions. Three, actually. Three, okay. Well, cool. Okay, so not a lot of people actually use the facility to access past transactions, is what I'm getting. So, from this room. From this room — it is maybe not representative. Just to expand on the reason why Felix was actually asking this question: we've been sitting on a new feature, for example, in Geth. So currently, all Ethereum full nodes maintain a transaction index that simply says that this transaction hash is located in that block. So essentially it's just a hash-to-block-number mapping, and it seems like a stupid cheap thing — except when we count that there are 500 million transactions, it turns out that that's 200 gigabytes of data. Sorry, 20 gigabytes of data. So essentially, every full node is storing 20 gigabytes of data just to be able to say in which block a certain transaction is.
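The index-size figure above checks out with simple arithmetic. The per-entry layout assumed here (a 32-byte transaction hash mapped to an 8-byte block number) is a rough model of such an index, not Geth's exact on-disk encoding:

```python
# Rough size of a transaction-hash -> block-number index
# (illustrative layout, not Geth's actual database format).
TX_COUNT = 500_000_000    # roughly 500 million mainnet transactions
HASH_BYTES = 32           # keccak-256 transaction hash (the key)
BLOCK_NUM_BYTES = 8       # uint64 block number (the value)

index_gb = TX_COUNT * (HASH_BYTES + BLOCK_NUM_BYTES) / 1e9
print(f"tx index: ~{index_gb:.0f} GB")   # ~20 GB
```

So 40 bytes per entry times half a billion entries gives the 20 gigabytes quoted — data every full node carries purely to answer "which block is this transaction in?"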
And if full nodes don't need it, then we can immediately wipe out 20 gigs of data that can be reconstructed at any point in time. And that's the reason Felix's question was really, really good, because we always wondered: is anyone actually using it, or can we wipe it? Should it be the default, or should it not be the default? So it's actually reassuring to see that people don't need it. Yeah, come to the mic right there. Hi, so I work at Synthetix, and we obviously have what's like a DEX, a bit like Uniswap. Excuse me, introduce yourself, please. Hello, my name is Justin Moses and I'm the CTO at Synthetix. We're a DeFi app, and we do indeed need to know the transactions that users have had over time in our dApps. However, we have been using something like The Graph, and that's actually been successful. So we're happy to use decentralized services like that, which can track a lot of transactions, instead of having to go back at any point and look at individual ones. So I guess what I'm hearing is that actually, all the people who might want access to transactions would totally be fine with externalizing that to some service provider, because it's not really something a lot of people expect the node to do for them — to get the transaction. I think Akasha would have to rewrite their stuff, but I mean... No, that's for the... I thought they were log-heavy. But if we get a crypto-economic light client implemented, then they could just replace one or two lines of code and it would just work as is. And it would probably work even faster than today. But just a follow-up question for you on the DeFi thing: when you look up transactions, is that ancient transactions, or is it transactions from the last week or month? No, we'll have all of them. So there are times that we're going to go back and look at even, you know, years' worth. Interesting. I wanted to throw this out there real quick.
We're about out of time, right? What is it, three minutes? Yes. So I personally think ETH1X, ETH1.0, is getting stronger. There was a period of, kind of like, you know, ETH2.0 is the new stuff and it's sexier and stuff like that. But now the Ethereum Foundation has dedicated at least $8 million to ETH1.0 and 1X research, development, et cetera. And that is being spun up. They did that back in May, to do it for the next year, and there are teams being spun up as we speak, coordinators being spun up as we speak. So there's a lot of good stuff happening. Some of it's happening in the background; a lot of it you're going to see in the forefront real soon. ETH1 forever. Wait, but is it really going to be forever? That's actually the next topic: the ETH1 to ETH2 transition. Just one last bit about that. I mean — oh, he's gone. But the guy was just standing there, had some pretty interesting comments. Oh, come back — still there. Yeah, well, it wasn't about you. But just in general, I think that developers, if you have these peculiar use cases, trying — I don't know, on Eth Magicians or somewhere like that — to explain them, to answer those questions, like: oh, our dApp is kind of weird because it uses logs in this way, and we'd like the node to do that or not do that. If there were Eth Magicians posts kind of either complaining or requesting features, I think that would be pretty useful. I mean, 1559 was one I was very interested in already, and I imagine a lot of dApp developers care a lot about that, because gas is lifeblood, and having a forum to be able to talk about it matters, because you don't really want to jump in on an EIP unless you have something meaningful to say. So I feel like Eth Magicians is kind of that more general, less formal forum, where you can just write a post and people will see it. Yeah.