Aloha, I'm Danno Ferrin, a network protocol engineer. And I'm Tim, product manager at PegaSys. Our talk today is about the proposed cadence for network upgrades, which you might also call hard forks, and changing it from the old model, which is kind of like buying a plane ticket: everyone gets on the same plane, you pick seats, you do it months and years in advance, things don't change, and it flies out or it doesn't. Instead, we replace it with a train model, where you show up at the train platform, and whoever's there when the train leaves goes, and whoever isn't there doesn't. But first, let's start with a little bit of history. What did network upgrades look like before this? It was a lot like a family vacation: Frontier, Homestead, Byzantium, Constantinople. Mom and Dad would tell us where we're going, we'd pack the car, you'd sneak in your favorite toy, we'd drive and drive and drive, we'd get there eventually, and it would all be happiness and sunshine in the end. Except when it wasn't. There were a few network upgrades that were more of an emergency, all-hands-on-deck type of situation. The first was the Shanghai attacks, and the second was when a security flaw was discovered as part of Constantinople. In those situations, we all came together and got the network upgrade out in a timely and fast fashion, laser focused on what the problems were. So where does that put us today? At the beginning of the year, we proposed a plan for Istanbul where we would take everything from the beginning, and we'd have landmarks along the way where we would do things like have the EIPs ready, and go from step to step, waterfall style. It was going to be rainbows and sunshine, it was going to be awesomeness, right? Yeah, well, like everyone in this room probably knows, waterfall doesn't work for software development. But we tried it anyway for Istanbul.
And so at the beginning of the year, we set ourselves a bunch of deadlines. We would have a kickoff in January and give ourselves a good amount of time to review all the EIPs, so that around mid-May we could have a final list. This would give us two months to work on client implementations, which brought us to mid-July. This way we could deploy the testnets around mid-August and finally have our upgrade go live on mainnet by mid-October, which was supposed to be two weeks before Devcon. We're forking next week? No, we're not. So how many of these deadlines have we actually hit? We hit the first one, the kickoff. Obviously, a lot of things went wrong here, and one of the more interesting ones is that the Ethereum community had grown a lot since the previous upgrade. So when it came time to review the EIPs, instead of having only a handful to review, there were so many that I wasn't able to fit them on this slide using the font they gave us. This means that around mid-May, when we should have had a final list of EIPs for the upgrade, we were still trying to wrap our heads around all of these. They were at completely different stages: some of them were early drafts, while others had working testnets, and there were dependencies across all of them. So this was not ideal, and we obviously missed that deadline. We actually got our final list of EIPs for Istanbul around mid-July, which is when we should have had the client implementations done. This pushed back the client implementations to the end of the summer, around August, which is when we should have gone live on the testnets, and it pushed back the testnet upgrades to October, which is when we should have had our mainnet upgrade. At this point, the mainnet upgrade will probably be at the end of the year or early 2020. And as this was happening, a lot of people realized it was not going super well.
And there were suggestions made around how we can make Berlin, the next upgrade, go much smoother than Istanbul. So we'll walk you through some of these ideas that the community brought forward and try to bundle them together into what we've called the train station model. The first person to notice how poorly things were going was Alexey Akhunov, the brains behind state rent and stateless clients. He posted a blog post where he outlined basically where things were at, where things could be, and where they should go. In the first picture, he outlines what was going on with the first set of network upgrades. Everyone would come together and write their tests, but there was one special client that had special privileges. Aleth, the C++ client, was the one client that could actually produce the reference tests that were needed by all of the other clients to verify that all of their consensus-critical code was correct and was going to function properly. So before anything could progress, Aleth had to have all of their stuff written and the tests written, and then they had to go work back and forth with the other clients. This created a gating function, which is why a lot of the network upgrades moved in lockstep before: there was one task that had to be accomplished. So what Alexey proposed is that instead of focusing on everything at once as a group, with one special client, we split these upgrades into various working groups. Each would have various areas of concern and interest that they would recommend proposals in, and these working groups would produce reference implementations for what they thought needed to be changed. Now, what he proposed relies on a very important thing for the reference implementations: a new tool written by Dimitry from the Ethereum testing team called retesteth.
What Dimitry did is he took all of the testing code that Aleth needed to produce these reference tests and removed the need to have Aleth run it. So you can create a reference test from any client that implements the APIs that retesteth uses. Geth has implemented them, Besu has implemented them, and obviously Aleth has implemented them. So you can target any of those three platforms right now to generate the reference tests for your EIP. We are not tied to just one implementation and one implementer to produce the reference tests anymore. The second idea that came up is the need for EIP champions. This is a picture of tweets from the Berlin All Core Devs meeting. There, Alex (a different Alex, not Alexey) proposed that each EIP needs a champion, and Boris echoed it and said yes. EIPs need a human being that we can go to and talk to and say, hey, what is the status of this EIP? One human who can call into the All Core Devs call and say, hey, state rent is moving more into stateless clients right now, so you won't be seeing this for a while. It needs one human who coordinates. So when we say, hey, we need more tests for EIP-2200, we can talk to that person, and they can go back to the developers and get those new tests created. It is not that one person has to do all those things, but if the EIP is small enough or the person is dedicated enough, it could be the person implementing it. It is one person who coordinates, is accountable, and champions the EIP through the process, taking it through the steps as available. Another big idea that was introduced this year is the concept of EIP-centric forking. This was brought forward by Martin of the Ethereum Foundation.
And the general idea is that instead of having all our EIPs move through the various stages of the upgrade at the same time, you have each EIP progress independently, and you only schedule them for an upgrade when they get to a point where they're ready to do so. Using this model, if you wanted to get an EIP live on mainnet, the way you would go about it is this. You first start by writing your EIP, and then you go on All Core Devs to get what is called an initial acceptance. This means that the core developers are generally positive towards your idea, and that assuming everything goes well in the next steps, they'd accept a PR for a well-written implementation of your EIP in their client. This can be a really useful signaling mechanism for organizations that fund some of the teams working on these EIPs. Organizations like the EF, like MolochDAO, or like ConsenSys can use this initial acceptance status to know that they're funding an effort that has a fairly high likelihood of making it onto mainnet. So once you have this initial acceptance stamp, you go and work on your reference implementation. This just means implementing your EIP against one of the major clients and then generating your reference tests using retesteth. This kicks off the security evaluation period of your EIP, where you'll want to reach out to people in Ethereum who are familiar with the bits of the system you're changing, so that they can poke at your EIP. As you're going through all of this, you want to feed everything you learn back into your EIP specification, specifically under the Security Considerations section. Once you've done all of that, so you've gotten your initial acceptance, your reference implementation, and your testing, you want to go back on All Core Devs to get your EIP moved to Accepted. At that point, all the other mainnet clients will implement it, and it'll be scheduled for a testnet upgrade.
Once that upgrade goes live, assuming everything is fine, it'll be scheduled for mainnet and will be deployed on the next mainnet upgrade. James Hancock put together a really good diagram to explain this process. At step zero, you start off with your draft of an EIP, and there the only gating function is that you need to get it approved by the EIP editors. This just means your EIP meets the basic EIP template requirements. From there, you get a green light from All Core Devs, an initial acceptance, which means that they're generally positive towards your idea and that they'd accept a good PR for it in their code base. You start working on your implementation, and then you own the testing for it. So not only are you generating the reference tests and trying to test both the happy path as well as the quirks and edge cases for your specific EIP, but you also want to get in touch with the EF testing team and other people who are knowledgeable about what you're changing, to make sure that all of the important security considerations get fed back into the spec. Once you've done all that, you go back on All Core Devs, get your EIP moved to Accepted, and it'll be scheduled for the next upgrade. At that point, it'll move into Last Call, and we'll keep running the reference tests against it all the way up to the testnet block. If it goes live on the testnets and everything is fine, then it'll be scheduled for mainnet, and finally your EIP will be live on mainnet. Finally, as Tim mentioned, the Ethereum community is growing. There are more people that we need to take into consideration than just the developers and the researchers who are creating these items. We have people running exchanges, we have people running their own personal nodes, we have corporations running nodes, and we have service providers such as Infura that run these Ethereum nodes.
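The progression described above is essentially a simple linear state machine. Here's a minimal sketch in Python; the stage names follow the process from the talk, but the class and function names are our own illustration, not anything from an actual client or tool:

```python
from enum import Enum, auto

class Stage(Enum):
    """Stages an EIP moves through under EIP-centric forking (names illustrative)."""
    DRAFT = auto()               # written up, approved by the EIP editors
    INITIAL_ACCEPTANCE = auto()  # All Core Devs are generally positive
    REFERENCE_IMPL = auto()      # implemented against one major client
    TESTED = auto()              # reference tests generated, security review fed back
    ACCEPTED = auto()            # All Core Devs accept; other clients implement
    TESTNET = auto()             # live on a testnet upgrade
    MAINNET = auto()             # live on mainnet

# The process is strictly sequential: an EIP only advances one stage at a time,
# and each EIP advances independently of every other EIP.
ORDER = list(Stage)

def advance(stage: Stage) -> Stage:
    """Move an EIP to the next stage, or stay at MAINNET once deployed."""
    i = ORDER.index(stage)
    return ORDER[min(i + 1, len(ORDER) - 1)]
```

The key property is that each EIP carries its own stage and advances on its own schedule; an upgrade simply collects whichever EIPs have reached the accepted stage by the cutoff.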
And we need to take into consideration that they need some standard amount of time to react to these network upgrades. So a few months ago I proposed EIP-1872, a set of upgrade windows to tell us when we should do our network upgrades, rather than just picking a random day when we're ready to fork. There were several recommendations in there; some of these I feel more strongly about than others. The first recommendation, and the one I feel most strongly about, is that we should fork on the third Wednesday of a month. Kind of like how Microsoft has their Patch Tuesday, where their security patches come out on the second Tuesday of the month unless it's an emergency patch. That way, system administrators know that they should not take vacations during the second week of a month, because that's when a lot of the work is going to come around testing and upgrading. Similarly, node operators would know that the third week of a month is probably not the best time to take vacation, because that's when a network upgrade could occur. Now, these could occur in any of the 12 months, but the second thing that I recommend, and this is more of a recommendation, is that we pick four specific months in which we prefer to launch mainnet upgrades. Those would be January, April, July, and October. Those are three months apart, and they have the advantage of missing a lot of U.S.- and European-centric conflicts around vacations and holidays, so that operators in those areas won't feel any conflicts. However, sometimes it's necessary to delay a network upgrade, as happened with Constantinople. In those cases, the delays would go out in month increments. If we have to delay it, we pick a new third Wednesday of the month, probably the next month, to shift it out a month. And the aim here is to do these network upgrades twice a year. We would do one in January and one in July, or one in April and one in October.
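To make the scheduling recommendation concrete, here's a small sketch assuming the rules as stated in the talk (fork on the third Wednesday, preferred months of January, April, July, and October, delays in one-month increments); the function names are ours for illustration, not anything taken from EIP-1872 itself:

```python
import calendar
from datetime import date

# Preferred mainnet upgrade months, three months apart.
PREFERRED_MONTHS = (1, 4, 7, 10)  # January, April, July, October

def third_wednesday(year: int, month: int) -> date:
    """Return the third Wednesday of the given month (the proposed fork day)."""
    # weekday(): Monday == 0 ... Wednesday == 2 (calendar.WEDNESDAY)
    first_weekday = date(year, month, 1).weekday()
    # Day-of-month of the first Wednesday, then add two weeks.
    first_wed = 1 + (calendar.WEDNESDAY - first_weekday) % 7
    return date(year, month, first_wed + 14)

def delay_one_month(year: int, month: int) -> date:
    """A delay shifts the fork to the third Wednesday of the following month."""
    year, month = (year + 1, 1) if month == 12 else (year, month + 1)
    return third_wednesday(year, month)
```

For example, `third_wednesday(2020, 1)` gives January 15, 2020, and a December fork that has to slip one month would land on that same date.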
So once you schedule one, you would do the next one six months later. And of course, in this recommendation, you could always fight fires whenever and wherever they happen. If we have another situation like Constantinople or the Shanghai attacks, we can upgrade on very short notice, or we can cancel the fork and move it out on very short notice. If there's an emergency, these recommendations do not constrain us from acting on it. But if it's not an emergency, the discipline that comes with a timeframe like this will be appreciated by the people who have to operate the Ethereum nodes that all of our networks depend on. So as a synthesis of all of these recommendations, we get to the train model. Rather than a model where we're all flying on an airplane, going on a specific flight at a specific time, instead it's like we're going to a train station, and as the trains come and go, you come and go. Yeah, and there are four main ideas here. The first is that EIPs should progress independently. Like we said before, you don't want to have all the EIPs moving in lockstep through the various stages of the process. Instead, every EIP and working group moves at their own pace, and whenever something is ready to ship, only then do you schedule it for an actual upgrade. The second point is that EIPs have a champion, a human being whom the all core devs can contact and say, hey, is this EIP ready for this network upgrade? Should we move it to the next one? It's a single point of contact who can answer these questions and help move the EIP through the process, maybe motivate the team to make deadlines, and maybe just let all core devs know that we found an issue and we're not going to make this network upgrade. And the third idea is that whatever is done is whatever ships. So an upgrade consists of basically the EIPs that are in a spot where they're ready to go live. Anything that's still in progress gets moved to the next upgrade.
Like we said, if there's any last-minute issue, then you can obviously kick that EIP out to the next train as well. And the final recommendation is that we move to semi-annual network upgrades rather than the annual-or-less upgrades that we've been doing in the past. One of the big advantages we get out of this is that when we tell an EIP team that they're not ready for this network upgrade, the next upgrade is only six months down the road rather than a year or more. So they know that the work they put in will still show up in a reasonable timeframe rather than at some unknown point in the future. This reduces a lot of the anxiety when it comes to scheduling and moving stuff in and out of a network upgrade. And from this, we can make sure that what goes into the network upgrade is the good stuff. And like we said earlier, we didn't come up with all of these ideas; the community mostly did. So we wanted to make sure we linked back to them, so anyone who's interested can go and dig a bit deeper into the original sources. Yeah, if you Google Alexey Akhunov, that's the article that you'll find. And so, yeah, that's what we had. Thank you all for listening. We have a couple of minutes for questions if anyone has any. Hey, thanks for the talk. I was just curious what's required to get miners on board with this plan of doing regular hard forks for upgrades, and are there any difficulty adjustments happening with each of those hard forks? Get who on board? Miners? Miners. I don't think it would change anything for miners. Obviously, the difficulty bomb is one thing that comes up, and that's usually handled through an EIP. So if we want to delay the difficulty bomb again, we simply choose whatever upgrade we want to do that in and add it to that upgrade. So is the idea then that the difficulty bombs would be changed to line up with this schedule that you're proposing? To be determined.
And historically, for the past two upgrades, it has been kicked out one fork, but my understanding was that they were trying to kick it out more than one fork; it's just that we've had difficulties getting these network upgrades out. Thank you. And one of the other reasons for proposing the regular timeframe is so that the miners know, hey, the third Wednesday of the month is coming up, we should check Ethereum: are we forking, are we not? So the miners are another important member of this community that we need to take into consideration. Any other questions? Okay. Thanks, everybody.