Welcome to the Eth2 Q&A session. What we're going to do is I'm going to ask some very basic questions just to get you all up to speed, and then I'm going to open the microphone to the crowd. Just a few things: the whole topic of validator rewards and so on, I don't know if you want to dig into it; otherwise, I think it's consensus that we can skip those questions. Makes sense? Yeah? Cool. All right, so maybe let's start with Joe. Can you give us an update on the interop lock-in, what you achieved, and what that exactly means?

So we had this lock-in, as we called it. It's kind of a workshop for a week out in Muskoka, Canada. I guess we've got somebody from Muskoka here. Nice. All right, so basically the idea was that we trap all the developers in this place and make them join our Eth2 cult, and then we work on our clients together and reduce the round-trip time of communication for everybody on getting their clients to interop. Interop is essentially being able to talk over libp2p to the individual clients and gossip attestations and block proposals, with validators shared among the individual clients: everything that you need, without syncing. Eight clients came, and seven got to interop, which was really exciting. I think we thought that probably only three or four would achieve interop in that time, which means that all the individual clients are really doing a great job. So I think that gives us a higher probability that we'll make something early Q1. Don't quote that. But that's kind of the idea right now of when people are saying they want to see a beacon chain up, and I think there's a higher probability of that now.

So just to give an update: does everyone know what Eth2 is, who it's formed by, how many clients we have, and all of that? I'm asking the crowd, because otherwise we should give a primer. OK, apparently everyone knows. All right, so we have a couple of the clients here. We have Nimbus. We have Lighthouse. The Eth2 specification team. We have Lodestar as well. Anyone else? Artemis? Oh, sorry. Yeah, Trinity. OK. And Quilt is here; they're prototyping phases one and two on Lighthouse.

All right. Which of the clients is the one that's most advanced?

Lodestar does a really good job at fuzz-testing the other clients. We blow them up. Lighthouse thought they weren't going to panic; we made them panic. They fixed it. We made them panic again.

Lodestar, best client of all time? Oh, no. That's what we got? Best in the browser.

All right. Do the existing clients here want to give a really quick update? Maybe we start with, OK, we don't start with you, Jacek. Yeah, Nimbus. First of all, thanks to ConsenSys and the organizers of the interop. It was really, really good to get together there and work out a lot of the practical little details that have to be worked out when you're trying to make two pieces of software work together. You'd think it's easy: there's a spec, everybody just writes the code, puts it up, and it works. That's obviously not the way it works. We have a pretty complex product coming out, and there are very many basic building blocks that have to fall into place: the networking, the cryptography, the spec itself.
And I think what was really valuable was using each other's clients as stress tests, as a first fast testing platform, so that we could feel out where the other clients had put their effort the most, and then improve our own client to match that level. So I think interop was great for this. We found lots of bugs, lots of issues, lots of places where the thinking was not consistent between the clients, and that was sorted out during the interop.

I mean, there was a lead-up to the interop where clients were pairing off and trying to solve some of the initial things to make it smooth, right? So that's probably why we saw so many successes right on the spot as well. Thank you. Artemis?

So after interop, we ripped out our Rust implementation of libp2p. We're the Java client, so we went for the JVM libp2p that was built by Harmony, and we put that in. Now we're working on some sync issues and some implementation issues to get ready. We're turning our team over to start working on a sharding client, and we're leaving our beacon chain client with a production team who's going to harden it for production. And yeah, that's kind of where we are. We're going to work on a new sharding client that's going to be built in Kotlin, which is kind of a superset of Java and also compiles to machine code. Lodestar?

Yeah, after interop we broke into several different subteams. We're working on getting Lodestar productionized; we have a lot of optimization to do to really get to this production level that Danny was talking about. And we were also looking ahead at light client work, so we started to prototype some of that out, and we're hopefully going to start contributing back to the spec a little bit. Thank you. Lighthouse?

We're going well. Our main focus at the moment is to target feature completeness, so phase 0 feature completeness. We're getting quite close; we're just polishing off the integration with the Eth1 deposit contract. We're also working on some slashing protection for validators. The idea of feature complete is that we can run the protocol, we have everything we need to run, and we can run off the deposit contract, but not necessarily that we can run for, say, a year with reasonable disk usage. So that's something we're working on now: trying to figure out how to optimize our database storage so that we can run for a long time. I think the next thing for us is we really need to start testing more public testnets. We need to get one of our own public testnets up, and we also need to start doing testnets with other clients using the Eth1 chain as the source of deposits, so we can test the dynamic validator set, because that's the thing we didn't test at interop. So yeah, that's us. And we're also really trying to help foster the Quilt team, to make sure that they can keep researching the next phases whilst still delivering phase zero.

Yeah, so Trinity was at interop. We had a lot of fun working with everyone. Since then, we've just been merging in all the things we found there, nothing major, but a lot of tiny little things, as I'm sure other people found too. The big thing for us will be performance. Trinity is a client written in Python, so while it's great in terms of being readable, approachable, and accessible, we definitely have to deal with the performance penalty we accept by choice.
So a lot of our work moving forward for the next couple of months will be making really fast interpreter code. Cool, thank you.

I am losing my voice, so excuse me. The Quilt team has been focused on doing research around phase two, building on execution environments and doing some early prototypes of basically the stateless paradigm. So we are already building execution environments that operate in a stateless way. And we built a simulation within Lighthouse's code base where we now have shard chains, or multiple shard chains, interacting with the beacon chain, and we're able to actually run state execution on that, and run these execution environments on it as well. What we're doing next is expanding that to include more simulations of what the state market might look like, and advancing on what the EEs for some of those contract frameworks might look like as well. Yeah.

And do you guys want to give an update? Yeah, we're working on all sorts of things. Adding some additional tests to phase zero. Justin Drake did some excellent work making phase one look really nice, easy to reason about, and elegant. A lot of research, thought, and conversations around phase two. And more recently, some conversations around potential alternate structures for phase one that might allow more rapid crosslinking and better communication between shards. At the same time, we're working on some standards, helping drive the standards process on BLS. Carl's been looking into keystore formats and pushing on the deposit contract stuff, which is formally verified and ready, but we're waiting on the BLS standardization to launch that. And we're opening up conversations around the coming public testnets with the clients, and always providing support and answering questions on the client side. So, just keeping everything moving forward.

Yeah, and I've been coming up with these alternate structures for faster crosslinking that Danny mentioned; we should have some information out about that maybe very soon. Also, data availability: there's been a lot of progress on different kinds of data availability proving mechanisms. There was also that interesting coded Merkle tree paper from a couple of days ago that lets you do erasure-coded roots with much smaller fraud proofs, which is also really nice. We're also thinking about a fee market simplification, and coming up with ways to deal with edge cases in fee markets and all of those issues.

I guess I've been thinking about optional upgrades. One of them is the idea of a secret leader election. Right now, beacon proposers and shard proposers are known in advance to be the ones creating a block, and these leader-based systems suffer from a denial-of-service attack surface. So what if we could have a system where you don't know ahead of time who will be creating the next block? That would be much more resilient. Another optional security upgrade that I've been working on for some time, and it's still ongoing and we're making lots of progress, is the VDF project, to have unbiased randomness. And I guess I've also been thinking maybe a decade into the future, about Eth3 and how to make Ethereum quantum secure. At least 10 years, I'd say. Dan Boneh says 30. But Dan also said, don't quote me on that.

All right, I'm going to open up now to questions from the audience. Who wants to go first? You mentioned you're doing a lot of work on fee markets.
One thing that I would love to see is a bit of a high-level abstraction so that you can specify how fast you want a transaction to happen. Like, I want it to happen in a minute, or five minutes, or an hour, or I don't care. And then it bills you whatever the market rate would be for that. You could pay more and it would refund you the difference, because right now it's guesswork, trying to guess the right amount of gas.

I think this is the sort of problem that EIP 1559 is meant to basically eliminate. Users would, in the normal case, see their transactions just included immediately. Anyone else? Questions? Okay, we have a mic, so can you come over please?

My name's Chi, and I actually have questions. So the number of shards, as I understand, will start with 1024. I'm wondering why you chose this number, especially together with a large number of validators. There will be significant costs, and a significant number of nodes, especially at the start of Eth2. Is there any concern that we may not be able to reach so many nodes or validators at the beginning?

So, this is one of the things that this faster-crosslinking proposal might actually end up changing. But I think in general the justification is that there are two different constraints we have. One constraint is the load on the beacon chain: the higher the shard count, the more scalability, but on the other hand, the higher the shard count, the more crosslinks on the beacon chain, and the more expensive the beacon chain will be to process. The other constraint is just that the higher the total throughput of the system, the harder it is to be a block explorer, and the higher the risk that some historical data will eventually end up being forgotten completely, and so forth. The data throughput that we're expecting is bounded, though: we've quoted numbers between one and ten megabytes per second, and that seems to be close to the upper limit of what people are fine with so far. And for the load on the beacon chain, we've run the numbers, and it seems like beacon chain blocks can, in the worst case, be verified in about a second, right? So once again, it's that much but not more, sort of thing. Yeah, I think it was around four million validators where we started to encroach over a second, so that's way above what we'll get.

Yeah, and I personally want to be conservative with beacon chain numbers. I really want to avoid the mistake that we made with Eth1, where, because the chain was heavy, so few people want to run a node. I would prefer a beacon chain node to be a thing that you would run by default, rather than something you run only when you need it, and that does necessitate fairly stringent performance requirements. Yeah, I agree; the faster it is, the better.

On the concern that there won't be enough validators to start with: we're essentially handling that by only doing the genesis once we have sufficient validators. The current numbers are around 65,000 validators, two million ETH.
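For reference, the genesis trigger in the phase 0 spec at the time looked roughly like the sketch below, in simplified spec-style Python. The exact constants, timestamp, and helper structure are assumptions that have shifted between spec versions; the point is just that genesis fires only once enough deposits have activated validators.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative constants matching the numbers quoted above; exact values
# and names are assumptions that have varied between spec versions.
MIN_GENESIS_ACTIVE_VALIDATOR_COUNT = 2**16  # 65,536 validators, ~2M ETH at 32 ETH each
MIN_GENESIS_TIME = 0                        # placeholder; the spec pins a real timestamp

@dataclass
class Validator:
    activation_epoch: int
    exit_epoch: int

@dataclass
class BeaconState:
    genesis_time: int
    validators: List[Validator] = field(default_factory=list)

def is_active(v: Validator, epoch: int) -> bool:
    # A validator counts as active if the epoch falls in its activation window.
    return v.activation_epoch <= epoch < v.exit_epoch

def is_valid_genesis_state(state: BeaconState, genesis_epoch: int = 0) -> bool:
    """Genesis only triggers once enough deposits have activated validators."""
    if state.genesis_time < MIN_GENESIS_TIME:
        return False
    active = [v for v in state.validators if is_active(v, genesis_epoch)]
    return len(active) >= MIN_GENESIS_ACTIVE_VALIDATOR_COUNT
```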
Another idea that we're considering is that even if we go for a thousand shards in the very long term, we could slowly enable them, ratchet it up: start with maybe 64 shards and then have it increase over time, because at the very beginning there's going to be very little activity, and it might be worth using fewer shards.

Yeah, speaking of block times, or block processing times, it's important to remember there are cases where we have to process blocks which are not necessarily good ones. They might be spam, they might be maliciously constructed, and that takes time away, so we need to have a bit of a buffer. That also affects the way messages propagate in the network. We're using an epidemic network, which basically means that all the data is replicated all over the network, and the longer a full block processing takes, the less validation we can do ahead of time, meaning the more risk there is that the network will be flooded by bad data, or that it will be slow, either of those two. So there's also the argument that if we can do more good validation upfront and still maintain low latencies, we gain in other parts of the system as a whole.

Yeah, so I'm wondering what the plan is, in general terms, for handling ERC-20 assets, in particular transfer functions. As a developer it's somewhat difficult from a user experience standpoint, and there have been many proposals, ERC-223 for instance, as interesting proposals. But as we transition to Eth2, I'm curious what the general plan is for handling those assets, because there are just hundreds of them, and migrating them all to ERC-223 isn't really feasible. Curious about your thoughts.

So, kind of unrelated to the whole transferFrom thing, which I agree needs to be reformed somehow, another reason why things need to be reformed anyway is that assets are probably the major category of application where we expect any particular token will need to be accessible on all shards. Most applications, realistically, unless you're in the top 10, could just live inside of one shard, and you could expect people to just move their coins to your shard, do stuff inside your application, and then go somewhere else. But tokens need to be able to be moved around everywhere, and so the idea of a token being a balance in one particular contract is not something that can really survive. You'd expect token holdings to themselves be more like individual contracts, and when that happens, it's pretty much an opportunity to reform other things about how the standard works as well.
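To make that idea of token holdings as individual movable objects concrete, here is a toy sketch. It is purely illustrative: none of these names or structures come from the session or from any spec.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TokenHolding:
    """Toy model: each holding is its own object that can be relocated
    across shards, rather than a row in a single contract's balance map."""
    token: str    # which token this holding is denominated in
    owner: str    # the owner's address
    amount: int   # units held
    shard: int    # the shard the holding currently lives on

def move_to_shard(holding: TokenHolding, dest_shard: int) -> TokenHolding:
    # At the protocol layer this would be a receipt plus a Merkle proof
    # on the destination shard; here we just model the state change.
    return replace(holding, shard=dest_shard)

def split(holding: TokenHolding, amount: int):
    """Splitting a holding yields two independent objects, which is what
    lets one token's balances fan out across many shards."""
    assert 0 < amount <= holding.amount
    return (replace(holding, amount=amount),
            replace(holding, amount=holding.amount - amount))
```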
Another question I had, just to go back to Justin: you mentioned a minute ago that it could start with 64 shards. Previously I hadn't heard this, so can you expand on that a little more as to how that could happen? Because previously the response was that you'd be under-utilizing the beacon chain. But is that really the only consideration, that you'd be under-utilizing the throughput that's made available? Why couldn't we start with a lower number and then scale up? Or would you have validators sitting out and not really activated?

I mean, the validators do all sorts of things. One of them is crosslinking, and technically speaking, reducing the number of shards is pretty trivial: when they do the crosslinks, some of these crosslinks will just be hard-coded to zero, and this zero hash will mean that there's no data flowing through the respective shards. But these validators would still be providing value; for example, they'd be contributing towards the finality of the beacon chain, and all sorts of other things. One good reason to start with a lower shard count is maybe more network stability: we wouldn't have so many shard subnets, which is a nice thing. It also gives services like block explorers a bit of breathing space to ramp up to bigger numbers. As to how we increase to a higher number of shards, we can have some sort of automatic schedule, or we could push it to the social consensus layer with hard forks.

So this is a possibility, that it could start much lower? Yes, and I think it's the natural thing to do.

Yeah, because I've heard from many people, and I'm sympathetic to the idea, that block space should be economically valued, or it has economic implications and should be valued highly, and if you have an overabundance of it, then you end up devaluing the transactions that are contained in it.

I mean, that's a good point. If we have fewer shards, it means we can really test the new gas market with EIP 1559, which, by the way, is awesome. It just seems to solve so many problems.

Hi, I'm Kent, I'm the head of R&D at ShapeShift, and I'd love to hear more from you, Vitalik, about the use of phase one in the data availability context. This excites me a lot, because it seems to me that a lot of the really vexing challenges with Eth 2.0 are around phase two, with sharded EEs and cross-shard communication, and this could effectively give us a very nice bridge where we have layer two solutions with STARKs and SNARKs. So, kind of a two-fold question: to what extent is the data availability aspect being specced out and built with those particular use cases in mind? And also, could you shed some light on the big research challenges and questions as that aspect is built out?

Sure. So there already exists a scalability solution like this. Plasma Group is doing a demo of it, I think at Devcon; they call it optimistic rollup. Optimistic rollup basically works by publishing transaction data in a very compressed form on-chain, but not doing execution on-chain. Because you avoid executing, you only have a little bit of data, and especially because data is very cheap relative to computation, even on the Eth1 system you can achieve scalability of like 10 to 100x; the theoretical max throughput is somewhere around 3,000 TPS if everyone were to just use optimistic rollup for moving coins around. And Eth2 could potentially make optimistic rollup even more powerful, because instead of using the Eth1 chain's data store to make sure that the data is available, you would be storing that data on the Eth2 chain, and you would feed into the Eth1 side Merkle proofs that prove that, hey, this data actually has been accepted by Eth2. So even if, say, we reduce the shard count and the scalability at the beginning is towards the lower end, say there's one megabyte per second of data availability, then if we talk about, say, 20-byte transactions, that's 50,000 transactions a second that could fit inside a rollup.
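The arithmetic behind that figure is a straightforward division; here is the sanity check, where the one megabyte per second and 20 bytes per transaction are the speaker's stated assumptions:

```python
# Back-of-the-envelope rollup throughput from the quoted assumptions.
data_availability_bytes_per_sec = 1_000_000  # ~1 MB/s, the lower-end estimate
compressed_tx_size_bytes = 20                # assumed size of one rollup transaction

tps = data_availability_bytes_per_sec // compressed_tx_size_bytes
print(tps)  # 50000, matching the ~50,000 tx/s figure quoted above
```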
And that's a system where the logic for doing it is something that is being developed already. I think the main challenge that would need to be solved is that you would need at least a minimal fee market on the Eth2 side, so that optimistic rollup relayers would at least have the ability to publish their data; that's one of the things we're actively thinking about. The other challenge is that if you do want to use phase one as a data availability layer for Eth1, you need to go down the finality gadget road and expose Eth2 data, Eth2 state roots, to Eth1, so you can make proofs against it. So, as we talked about in the last session, depending on the parallel roadmaps and how long things are expected to take, it may or may not be worth it.

Right, next question. Martin? Hey, Martin.

Yeah, so I came a little late, so I hope this question hasn't been asked; if so, I'll just excuse myself. The question is around the process by which a particular execution environment is chosen for a shard, whether there's some auction going on, or renting of that space, and also what might happen if you need to change something about the execution environment, like forking it or something like that.

So, within all of the proposals we're considering so far, I think blocks would have the ability to contain space for multiple execution environments in them. It's ultimately the block proposer that chooses what to include, and different people would be able to bid, but you would expect at least popular execution environments to have a presence and be executed in every slot in the shards in which they're located.

Yeah, so to be clear, the execution environment is defined on the beacon chain, and thus available on all of the shard chains. When you make a block, you're signaling that this block is for this execution environment, and thus the data in this block, or chunk of data, maybe multiple, is for that execution environment. So you can think of it as: this block was for execution environment A, this block was for execution environment B, and the next one was for A. The actual blockchain of A is a subset of the actual shard chain. And on the forking: the nice thing is you could actually define migration paths, such that you could deploy a new EE and have some sort of migration from the previous EE to the new EE. I expect these things to emerge in a standards way, kind of like the ERC standards. But I could also see that certain EEs, especially the Eth1 EE, could still maybe be subject to hard forking to upgrade and manipulate, depending on the social dynamics around that. Yes, so I'm saying that to actually change an EE would require changing the actual code of the protocol in an irregular state change. There are options to migrate and upgrade EEs without doing that, but it might still be a social tool to coordinate and manipulate these things.
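As a rough illustration of that "subset of the shard chain" picture, here is a minimal sketch. The field names are invented for illustration and are not from the actual spec:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ShardBlock:
    slot: int
    ee_index: int  # which beacon-chain-registered EE this block's data targets
    data: bytes

def chain_for_ee(shard_chain: List[ShardBlock], ee_index: int) -> List[ShardBlock]:
    """The logical chain of one EE is just the subset of shard blocks
    tagged with its index; many EEs share the same underlying shard chain."""
    return [block for block in shard_chain if block.ee_index == ee_index]

# Example: blocks on one shard alternate between EE A (index 0) and EE B (index 1).
shard = [ShardBlock(0, 0, b"a0"), ShardBlock(1, 1, b"b0"), ShardBlock(2, 0, b"a1")]
assert [b.data for b in chain_for_ee(shard, 0)] == [b"a0", b"a1"]
```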
Will the EEs be permissioned? Like, if you want to add a new EE, do you need to fork anything, or can you just add new EEs? There'd be a large transaction fee or deposit requirement, so the beacon chain doesn't get filled up too much, but anyone would be able to do it. So you're not auctioning them off, like some others are doing? And are you expecting some of the bigger applications to have their own EE?

Possible. Yeah, possible. For a lot of the reasons you might want an EE, it doesn't actually make sense, and it might just make sense to operate within an EE. But, funny that you ask that, I have thought it could certainly be a way to advertise: if the casino has their own EE, come join us. Yeah, like a gaming EE. I mean, there's probably some data that we would want to store that no one else wants to store. Yeah, or maybe not data, but more like how you optimize the structure of accessing certain things. Yeah, a fraction-of-a-second block time would be really good. I think it might be another tier of developer that makes an EE. It might be someone that's working really deeply in protocol research, or zero-knowledge-proof research or something; they really know what they're doing, and they can't do it anywhere else, so they'll go to that. But your typical developer at the application layer would probably want to stay away from it.

Any other questions from the audience? While we're waiting for a question, maybe to put a bit of cold water on the whole EE thing: the EEs are a beautiful layer of abstraction, and very useful in the context of migrating from EE version one to EE version two, but there is also the possibility that EEs add extra friction between them, in addition to the friction between shards. And there's also the possibility that we'll have unsustainable EEs, EEs without state rent, for example, similar to what we have right now in Eth1, and if those become popular, then there are all sorts of complicated questions. So it is possible that, at least in the short and medium term, we will roll out with some sort of default enshrined EE, some sort of native Eth2 EE which acts and behaves somewhat like Eth1 does today, and you won't have that much room to play with in the medium term. And then in the long term we open up the EEs to everyone. But how do you manage that process of opening up to everyone? Well, that will happen at the consensus layer, and one day we say, hey, anyone can publish an EE. So, going back to the permissioning question: maybe no one will be able to publish EEs for some time. I'm not saying it will happen, but it's a possibility.

All right, questions. How much thought has there been about transparent sharding, where developers don't have to figure out which shard they want to be on? Because if you give them that choice, maybe they'll all just choose to be on the same shard.

I mean, that's one of the really good benefits of opening up the EEs: all this innovation on transparent sharding can be done at the application layer, and we as consensus designers don't have to bother about that. So if Near or Polkadot or whoever comes up with some cool innovation, that can be integrated with fairly little friction. And there are already ideas out there for using the shards as a fungible data availability resource, decoupling the logical centralization of the application from the underlying resource, which is sharded.

So who decides where the application goes? Does the application get to decide what shard it goes on, or does it have no control over that?

I mean, at the lower level, the EE lives in the beacon chain and is made accessible to every shard, but then it becomes a question, for the developers, of how they want to either restrict themselves to a specific shard or have some sort of virtual execution environment which is separate from the consensus-level concept.
Also, within EE development, there are a few things you can do, when this opens up as well. I think there are a couple of questions. One question is: do you have to deploy to the beacon chain, and do you also have to deploy to multiple shards, or do you automatically have it available across all the shards? That's one piece. But also, within your execution environment, you could technically build some basic level of scheduling yourself, saying: I will only operate certain transactions on shard five for this series of epochs, or shards five, six, and seven for this series of epochs. So there's a lot of flexibility in how you can write these, and in how you may be able to enforce load balancing yourself, just within the semantics of an execution environment.

Are we going to retain the financial Lego that we've got, the plug-and-play composability? Are we losing or compromising that with the sharding, from a tooling perspective? Right now you can plug Compound and Dai together, or something; if your Lego blocks are now on different shards, they can't interop without cross-shard links, which might be slower and have some penalty.

So I think, I mean, I get the concern, but of all the DeFi integrations I've seen in practice, just as one example, I don't think any of them really break from asynchrony. One of the reasons why is that half of these integrations just have to do with one system containing a token from another system, but tokens are one thing that's probably easier than anything else to move across shards: you just yank the contract over, and with one Merkle branch it's somewhere else, wherever you want it to be. And then if you look at even Uniswap, for example, it just moves tokens around: you move the token to the Uniswap shard, swap it, move a token somewhere else. So most of the integrations do work that way. The ones that don't are basically the ones that involve synchronously calling other applications, but even in that case, a lot of the calls tend to be asynchronous, or at least would be perfectly fine if you did them in an asynchronous way. So at least given the applications that we know about, I think it's a smaller risk than a lot of people think.

I mean, I could see execution environment fragmentation being an even bigger issue than shard fragmentation, if not handled correctly, and it's definitely something that we're thinking very actively about how to smooth over.

I mean, concretely, it might turn out that cross-shard communication is actually faster on Eth2 than an Eth1 transaction; Eth1 is on average 15 seconds. There are two ways we could have fast cross-shard communication. One, we have this new proposal where we have crosslinks in every slot on the beacon chain, so we might have cross-shard communication in just a few slots. And the other way, which is generic, is to use optimistic crosslinks, where basically you have some sort of off-chain mechanism to gauge the probability that some crosslink will eventually make it into the beacon chain and get finalized. You don't have 100% certainty that it will happen, but you can design your application around that and benefit from the low latency.
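For context on the "one Merkle branch" mentioned above, here is a minimal sketch of the kind of proof check involved: the destination shard verifies a receipt leaf against a state root it already trusts via a crosslink. The hashing scheme and the receipt framing are illustrative assumptions, not the spec's:

```python
from hashlib import sha256
from typing import List

def hash_pair(left: bytes, right: bytes) -> bytes:
    return sha256(left + right).digest()

def verify_merkle_branch(leaf: bytes, branch: List[bytes],
                         index: int, root: bytes) -> bool:
    """Walk from a receipt leaf up to the root. If the recomputed root
    matches the crosslinked state root, the destination shard accepts
    that the token (or contract) really was emitted on the source shard."""
    node = leaf
    for sibling in branch:
        if index % 2 == 0:
            node = hash_pair(node, sibling)   # current node is a left child
        else:
            node = hash_pair(sibling, node)   # current node is a right child
        index //= 2
    return node == root
```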
Can you do some kind of automated garbage collection, and start moving smart contracts that want to be on the same shard together, so that they're faster?

So, on the space of interaction across multiple shards, I generally think there should be a set of tooling within the HLLs or DSLs that are used, so within Solidity, Vyper, anything else. From a dapp developer perspective, if you accept asynchronicity as the basic construct of communication across shards, you can have tooling that does that through message passing; you could have tooling that does that through a two-phase-commit system, if you need some type of atomic transaction. So I think, from a dapp developer's perspective, you're going to have more tooling in these languages that you would utilize for the problem you need to solve.

Yeah, what do archive nodes look like for Eth2, and are block explorers ready for this?

If you're a block explorer, you would pretty much have to download every block on every shard, which is one to ten megabytes per second; multiply by 86,400 seconds and we're talking roughly 86 to 864 gigabytes a day. So you have higher hardware requirements. That's one issue, the hardware requirements going up. Another issue is that with Eth2, basically more things are being layer-two-ified than before. You have execution environments, and within execution environments we're also planning to push forward on transaction abstraction, so you no longer have EOAs and contracts as different classes of things. And then you have cross-shard transactions and different ways of implementing them. So your responsibility to understand popular layer two protocols will realistically end up increasing, and that's another thing you'll probably need to watch out for. So I guess, in general, being a general-purpose block explorer is realistically one of the things that will get harder.

All right, we have time for one last question; make it quick. So, as a dapp developer that has an ERC-20 asset, is there anything specifically we should do, or maybe not do, as we look toward the Eth2 rollout?

Right now, no. I'm inclined to just say don't think about the problem now, and in the future, as the execution environments start becoming more nailed down, you'll be able to think more about how to design the token, and the application-specific details, around a cross-shard context. But no migration specifically. There is a possibility you'll have to do a one-time token upgrade, but that could also be done with a wrapper.

Knowing that tokens are the most used function on the current chain, is it not worth considering hard-coding them, making them much more efficient than they are right now? Or taking a snapshot, potentially. What, hard-coding to make them more efficient? Well, because right now they execute code just to do the basic functionality of the token. The code is fairly small, right? Execution of EVM code is not a dominant factor of overall processing time. OK, so if we wanted to really optimize tokens heavily, there are things I would look into. One is to just review the whole ERC-20 transferFrom architecture, and think hard about whether we want to build into higher-level language standards a more explicit pay-for-transactions-with-tokens function-calling strategy.
Another thing to look into is that one kind of functionality I think we'll likely put in is the ability to store pieces of contract code on the beacon chain, and then have the ability to just reference them by address. This would prevent things like account abstraction, tokens being abstracted, and all these other things from blowing up witness sizes, because instead of having a piece of code in your witness, you'll just be able to point to an index. So a lot of those gains have kind of been factored into the existing design already. Another thing to look into would be redesigning user accounts, so that you would get the efficiency benefits of being able to hold many units of a token within one particular package that could get passed around between shards. But there are trade-offs there, because from a privacy point of view, if you want to improve your privacy, it actually makes sense to separate out all of your token holdings from each other, and even have multiple accounts per token. So in general, I feel like the optimizations are needed, but there are generally ways to make the optimizations without enshrining too many things.

Well, it's important to note that the notion of owning ether actually becomes a layer two concept with EEs. The only thing that owns ether is the EE: the EE has a balance, and then there's presumably some sort of account model and state model defined within that EE that allows you to actually have a right to a portion of that ether. So instead of enshrining tokens, we're actually now de-enshrining ether.

All right, we have to stop here. Sorry, everyone. Do you want to add anything? Fewer hard-coded things, at the end of the day, is better. You know, not having an ERC hard-coded is just going to be better: fewer hard forks, less nonsense to deal with, from a PM perspective.