Cool. Thanks, everyone, for joining today and this week. It was great to hear yesterday what everyone is using Besu for today. I'm going to walk through a product-focused update. My name is Matt Nelson, for those on the call who don't know me; I am a product manager for Besu at Consensys, and I also manage our remote signing capabilities through Web3Signer.

Today I'm going to walk through the emerging strategy and our focus, at least for Consensys. Anything I say today doesn't necessarily represent the entire Besu project, but it does represent what Consensys is focused on as far as maintaining the project. I'd also like to invite everyone to treat anything you see in this product update as a collaborative process. Please work with me on requirements and on what you want to see out of the project. There's open governance, so it's not just me and Consensys dictating everything. We want to make sure this is an opportunity to learn more, and at the end of the presentation I'll talk specifically about how you can get involved in contributions and what that process looks like, both on the coding and governance sides. I also have some suggestions about how we can open up the product strategy to more groups. So I'll give a quick agenda.
We're going to review the roadmap. We're going to talk about Besu with public and hybrid networks, and about Besu with layer 2s. Then I'm going to use those as the reasoning to explain our post-merge architecture and where we see the client moving from that perspective. And we're going to end with talking about contributing to Besu, and a little bit about core development for Ethereum as well, which colors a lot of the rest of the roadmap and dictates probably half, if not more, of the work we actually do on the client day to day.

So I'll start with a little bit of an overview. Why are we changing the mission, why are we focusing on public networks, and why are we refining what Besu is intending to do as a client? With the merge, we opened up the consensus layer and the execution layer of Ethereum and separated concerns for these clients. On layer 1, Besu is focused on execution, and that's really about pulling in data from the blockchain, executing it within the EVM, and returning results. The consensus over that data and the visibility of that data are handled externally to the client, at least in a mainnet context. In a private network context, Besu handles all of the components, consensus included. But in these public settings, it is a separation of concerns that I think is going to color the architecture of Besu going forward, and of Ethereum and some of these blockchain-related networks. Besu has become a key part of the proof-of-stake validator stack and the execution/consensus client combo on mainnet, and we want to maintain compatibility with mainnet, right?
Our intention is for Besu to be a long-lived and valuable piece of infrastructure into the future of public networks, for cryptocurrencies and blockchains in general, and we're prioritizing this shift to public networks in terms of the feature development we're doing at Consensys at this point. I'll explain a little more of that when I get to the layer 2 perspective, but we're focusing on performance, on modularization in tune with the separation of concerns in the architecture, as I mentioned, and on resolving some residual tech debt we have from supporting this number of use cases within the same client. That's where we are right now.

But why are we doing this? It's about making sure we can ensure network participation for nodes running on Besu. That means a number of different things: it means you can operate Besu in a private network context, in a sidechain context, in a mining network context, something like Ethereum Classic. And we're continuing to develop the client towards a multi-chain world where we have the EVM, EVM-like chains, EVM-compatible chains, roll-ups, hybrid networks, all this stuff, right? We want Besu to be a key, indispensable piece of infrastructure for a world that is going more and more multi-chain, enabling those connection points between these different chains by building around EVM standards, token standards, smart contract standards, and more. So we have a familiar license with Apache 2.0, we have Java, a familiar programming language, and a client you're all familiar with, right?
And now it's about translating a lot of this into becoming the best and most flexible infrastructure for institutions looking to participate in and build blockchain networks. That's the refined mission, captured in this roadmap that many of you probably haven't seen, although it has been on the wiki for quite some time. With the nature of Ethereum clients, I tend not to commit to more than two quarters at a time, given our shifting priorities, but this is a look at what I see for 2023.

We have quarter one focusing on Shanghai, the fork that will be delivered April 12th on mainnet, and also focusing on performance. I've been touching on this for the last couple of days, but we have a ton of performance-related improvements that we put into the 23.1 series, alongside a host of stability and correctness improvements: refactoring portions of the code, retooling RPC to be in line with other clients like Geth so we have very consistent results across clients, and just working on performance, performance, performance. Really this was spurred by the post-merge adoption of Besu: we went from around 1% of network nodes to around 10%, and settled around eight or nine percent. So we've had a lot of new users, and that influx of folks working on mainnet exposed a number of performance challenges. We've taken that feedback and focused this quarter on improving performance, as well as building the features we need to be compatible with Shanghai and making sure Besu can continually go through the transition without issue. So this quarter is wrapping up. Well, we've wrapped up most of this work; like I said, Shanghai is being delivered.
The early fork work on Besu was flawless, which was fantastic, across all of our client combinations. A lot of this might be unimportant or confusing information for you all, but needless to say, we're keeping up with mainnet, and that is the primary focus for the Consensys contributors right now.

In the second quarter and the latter half of the year, we're going to shift our focus a little bit to roll-ups. The reason is that the roll-up ecosystem is becoming the focal point of private networks, for a number of reasons I'll get to later in the presentation. I'll also revisit this slide at the end to bring back a little context. But we're focusing on making Besu into a roll-up engine, taking that separation of concerns I mentioned earlier, where the execution clients are focused on things around EVM execution and other components, and basically outsourcing that consensus-layer functionality. And it's the same with a roll-up, essentially.
We're outsourcing consensus to this other layer, we're keeping some of the components of execution, and we are retooling Besu to be really suited to that format. So in Q3, we're looking to deliver things like optimistic Besu packages and zero-knowledge Besu roll-up packages. What this really means is client diversity on the roll-up layer. So if you're using optimistic Besu, if you're using Optimism, for example, the code is compatible with that, so they're not just using a fork of Geth: they're using multiple clients, creating more robustness and more opportunity on layer 2, but also creating a familiar migration path for networks that are exploring L2s and roll-ups.

We're also looking to target more neatly packaged mainnet clients. As I mentioned, there's the execution layer and the consensus layer; some of our focus is cleaning up those connection points, and instead of having these two separate, unique components, packaging some of them together to make it a lot easier to get on mainnet without the overhead and headaches around infrastructure management. In Q3 we're also focusing on the Cancun fork. I can get into detail, but primarily it's focused on something called EIP-4844, which is the first of a set of work around sharding. If you've heard about sharding and data availability improvements, we are working feverishly to get those improvements onto mainnet, which will bring the cost of roll-ups and other components down a lot. And there are also a number of EVM changes.

Beyond that, things are a little more unknown, but we're still focusing on working the code base from what you know today, with these multi-use-case, separated components, and opening that up a little to be much easier for us to manage as maintainers.
So that's doing things with modularity, making sure we have the ability to support these use cases. Antoine talked a little bit about the protocol schedule; we're cleaning up a lot of the tech debt around how we manage the protocol schedule and how we manage these different paths within the code base to serve things like private networks, public networks, roll-ups, and more. We haven't really had time to go back and evaluate this format, which used to be a lot simpler because there were only a handful of schedules. There are now a lot of weird quirks, especially post-merge, that we're looking to clean up. My hope is that this will also give us the ability to become compatible with more chains and more formats on layer 1. So if you've heard of chains like Gnosis Chain and others, Polygon, some of these other things: finding those points where we could potentially become a piece of...

Hello? Yes, let me just ping them; it looks like there's some issue with... Hey, can you hear me? "Hey, Matt. Sorry, it looked like it got cut a little, but we can hear you now." Okay, cool, I'll continue on. Just feel free to stop me if there are any questions or if there are audio issues.

Okay, yeah. So, focusing on multi-chain Besu. That doesn't necessarily mean that one node will run all of these different chains, but it will simplify the process of using Besu to connect to multiple networks. We're also looking to provide more tools and specific features for infrastructure providers.
Really, that's around packaging improvements, potentially operating some of the existing stuff we already have, potentially migration scripts for certain kinds of chains to move their data around. And one of our final focuses is making Besu an Ethereum reference client. What that means is this: we have things like the Ethereum Yellow Paper, we have all these EIPs, and we want to make sure that Besu's actual implementation is accurate to a T across all of those, so that when you're doing research with Besu on mainnet-specific topics, cryptography, et cetera, we have the ability to be an accurate and competitive client against something like Geth, which is the gold standard for that kind of research.

Beyond that, there's tons of open work, as usual, in the Ethereum ecosystem. We have things like in-protocol deposits, proposer-builder separation, Verkle trees, history and state expiry. More to come on that; nothing too crazy right now. We tend to wait until those things boil down in the Ethereum research space, but there's a ton of open topics we're looking to get at.

So I'm going to pause here, because I'm sure we have questions in chat and in the room. As I mentioned, I'll explain the why of this roadmap, and then we can return to this slide in the next handful of slides. But I'll go there first. We are not dropping any existing features, besides a handful that we have already put deprecation notices out for, including IBFT 1.0 and some of the GoQuorum features. We know you're building these networks and you need to continue to be able to operate them. Consensys will just not be building new features in this capacity.
However, we are looking to onboard contributors. We have Web3 Labs; Conor spoke yesterday about how they'll be getting involved in handling a lot of these specific feature requests and bug reports. We still fix critical bugs that are found on the private network side, depending on the severity; we might not be as quick, depending on what those bugs are and the version they're present in. But we do not intend to cut any of the existing support beyond what I just mentioned, at this time. And we are actively looking to onboard new contributors and get them up to speed on what it means to propose and maintain new private-network features, whether that's through the plugin API or the actual code base. So the short answer is no, we don't plan to deprecate those features. It's also an open source project: if you wish to maintain them, you may just as well. But Consensys is focused on these items and public networks at this time.

So the operator sees the details, but other users and other operators do not, right? In an enterprise setting, you still sort of need that level of transparency for bigger purposes. So with the zk roll-ups, is the intent to basically make all of the transactions private, where the user actually creates the zk proof and the operator would not be able to see them?

No. I think in the enterprise context, the term we like to use is a single-sequencer roll-up, where there's essentially one operator that sees the majority or all of the details, and the proving happens on the latter half of that, so you don't necessarily need to apply that kind of privacy protection. What I'm describing in this roadmap is for public-facing zk roll-ups, where they will need to split and basically privatize the account state and prove it within the infrastructure, but not necessarily at the individual user level. I have a specific slide that goes through what that privacy means later on. But to answer your question, both flavors are possible. The single-sequencer roll-up might become more of the enterprise context for what the evolution of some of these networks looks like, if there's not enough scale or there's more granular privacy stuff, but I'll get to that in a later slide. At the end of the day, a single-sequencer roll-up is where one operator sequences, and they have visibility.

To restate for the room: the question was around roll-ups specifically, and whether a zk roll-up means there would have to be baked-in privacy even if you're just a single organization or enterprise using it. The answer is that it's flexible; it really depends.

The next question was around account abstraction standards, specifically EIP/ERC-4337, and how it pertains to Besu. As far as Besu is concerned, since it's an ERC, which is a set of smart contract standards, there are actually no protocol-level changes. What we will have to implement, however, is RPCs that let us interface better with these bundlers and with the smart contract wallet ecosystem that will be created. We do plan on supporting that as soon as possible. However, the RPC standards are being developed literally as we speak, and they're not ready yet. But we will be building them into Besu, and they will be aligned with the L2 ecosystem. We're in contact with Optimism, Arbitrum, zkSync, all these other folks, the networks that have gotten traction. We've all gotten together and discussed what this looks like, and the output will be a set of RPCs that we will implement in Besu.

Yes, we are still taking some inbound requests for things like fixes for Tessera. Our plan, again, is to afford that work to other organizations,
including Web3 Labs. Consensys's stated approach, again, is that we will not be deprecating these features, but we are actively looking to have other contributors maintain them and build them out with additional components. That question was about Tessera and our commitment to it. As I mentioned earlier in the call, this roadmap is indicative of what Consensys will be working on as far as Besu, and we are open to any and all feedback.

The next question was: what is the target TPS and finality of roll-ups? That really depends on whether you're using an optimistic roll-up, a zk roll-up, or a proof-of-authority roll-up. The TPS number honestly depends on how many sequencers and nodes you have; in theory you can keep layering them on top of each other for infinite scale. Finality is totally driven by what type of roll-up you're using. An optimistic roll-up typically has a finality time of a handful of minutes, but there's also a one- to two-week-long challenge period where you can essentially try to prove fraud in the network, or prove that a specific kind of activity is going on, and they use game-theoretic incentives to encourage that. I don't think optimistic roll-ups will see much use in an enterprise context for internal use cases, but when I get to the layer 3 app-chains portion of this discussion, I think we will potentially see things built on top of those roll-ups. Think of Coinbase's Base, the roll-up they just announced that's built on the OP Stack, which will take advantage of Optimism's fraud proofs and, basically, Optimism's token incentives to secure their network, which is in turn secured on top of Ethereum. But in the enterprise context, the scale, again, depends on the deployment. You've worked with the proof-of-authority roll-up numbers we did before, and it's five to ten times the scale of the layer 1. It really depends.
So I'm hesitant to give specific numbers. Any other questions? Not from us, no. The question was around incorporating zero-knowledge components with anything that comes out of Harry's. From our side, no, but again, we're open: not zero-knowledge proofs at the client protocol level, but supporting the other environments that touch on zero-knowledge proofs. I'm going to keep moving through my slides, because I think a lot of the questions will become more self-explanatory, and then we'll come back to the roadmap.

I'm getting ahead now, so here's an update for folks in the room on Besu and public and hybrid networks. There's a lot of interest around public networks in the news: not necessarily Ethereum mainnet, but chains like Polygon, some Ethereum mainnet stuff as well, plus hybrid chains using Besu for CBDCs, building on the standards and using things like bridges to deploy back to mainnet. As was touched on yesterday, there's a ton of stuff in the news; I don't need to bore you with the links. But Besu is a complete mainnet client. I know some of you are using Besu for mainnet today, but as I mentioned, we fluctuate between around seven and a half and ten percent of network share at any given time. There are four main clients in the Ethereum execution layer landscape, and these execution layer clients have really strong voices around what happens within the Ethereum protocol, whether that's changes to the EVM or changes to the format of certain transaction types. All of this stuff that trickles down to private networks starts with these four teams.
For the most part, plus Ethereum researchers and others. It is an open and collaborative process; these teams are the primary funnel for changes, but they're not the only conduit, and I'll talk more about core development later. But we are one of those four clients, and that gives this group of people a voice to influence Ethereum mainnet in a number of ways. We participate in and steer network upgrades on Ethereum mainnet.

Besu has historically been used for private networks, but we went from around one percent network share to, again, around eight to ten percent, and that's largely because of the Bonsai storage format. It's our big differentiator: it makes a mainnet node two-thirds the size of any other client's in many instances, which is very valuable for people running on hardware that costs a lot of money, or running a validator on hardware in their garage with low resource requirements. Not many other clients were built with this kind of enterprise focus in mind, or with any of the private network features. The only other client that implements Clique is Geth, sort of, and they're dropping support, so never mind. If you want to use private network consensus, you're basically using Besu. But again, that means we were built from the ground up with these features in mind, and a lot of the infrastructure support we have built in is born out of that process.

Some more information here on the right-hand side: this is a breakdown of all the publicly visible nodes in the world. Around half of the Ethereum nodes on mainnet are run out of the United States, typically in data centers. If you look down here, 66.5% of nodes are hosted. That's not necessarily great, though it's not the end of the world. But the good number is that 33% of nodes, around a third of the network, run on residential internet connections, which means people staking from home.
This is great for the decentralization of the network, because you don't want, in theory, Amazon or Google to be able to shut off access to those nodes. But again, the United States and Europe are predominantly the major hosts of nodes in the Ethereum network, with some Asian representation as well.

So we had a question from Antoine. That's a good question. I know Peter put in a pull request on the Geth repo to delete all the Clique code; I don't know if that PR has been merged. For some elucidation: Geth invented Clique consensus as a mechanism to test local networks and to deploy sort-of-enterprisey local networks. But Geth is on a complete killing spree of dropping features: things like fast sync, some of their older storage formats, and potentially even the archive node. So there's a lot shifting on Ethereum mainnet with the launch of the Beacon Chain, and we'll have to see how it goes.

So again, people build on Besu because we support Ethereum standards as well as private networks, and that was not done in a vacuum; it was a very deliberate decision to support both hybrid use cases and the evolution of private use cases to public over time. A lot of you have heard about the Palm network. They take advantage of IBFT 2.0 proof-of-authority consensus to have a basically gas-free network. Initially, the draw was the carbon reduction versus mainnet before the merge: their selling point was very fast transaction speeds and low costs for minting NFTs, with no carbon footprint versus the very wasteful proof of work. If you look at the right-hand side here, the Palm network part on the bottom, that's the really interesting thing. That's just a typical Besu IBFT 2.0 network that has only a handful of components making it work with mainnet.
They have a bridging layer, and then integration points for off-chain storage. But in reality, the magic happens at that bridge layer, and it works with the scalable Besu network in a way that allows users to buy and sell NFTs on both mainnet and the Palm network. The reason is that they don't implement a custom NFT standard: they support, of course, ERC-721, and deploy these sidechain NFTs that can then be bought, sold, and moved, if need be, to wherever they'd like to go.

I like the Palm example because their longer-term plan, potentially, is to migrate to public infrastructure, and it shows that evolution, right? We have an IBFT 2.0 network that becomes a sidechain via a bridge, which could in theory be migrated to public infrastructure on L2 or L3 with a similar process, but none of the underlying architecture has to change too much. The infrastructure can stay the same, the Besu nodes can stay the same, and ideally, if we get to that vision I'm laying out around layer 2s and roll-ups, they wouldn't have to change that infrastructure either: they would be running nodes within those networks, or potentially getting rid of infrastructure altogether and deploying their smart contracts on layer 2, taking advantage of publicly available bridges.

Honestly, I don't really like this slide, but: what does Ethereum participation mean for enterprise on public networks?
You can get paid: you can stake ETH, start playing around with public network endpoints and running nodes, and get returns. The rewards hover anywhere between 6 and 10 percent, depending on MEV and other things; not really important to worry about here. At this point, I think proof-of-stake Ethereum has shown that it's sustainable, and with all the changes we're making, it will be very scalable in the future. I'd give that one 6 to 12 months before I put a stamp on it and say, okay, Ethereum has scaled. And again, you're already building these applications that run on private networks; once we can get over the regulatory hurdles, the network will be ready.

Okay. The bulk of what I'm going to talk about is this layer 2 ecosystem, but honestly, once I get to the end, we can talk about pretty much anything you'd like; I only prepared slides for half of the time. So: institutions and DeFi need more than layer 1. Throughput and privacy are the big ones for layer 2s. Throughput meaning scale: we talked at length yesterday about how TPS numbers, while they may be misleading, do tap out pretty quickly on layer 1. There are some quirks you can take advantage of, and tactics and all that good stuff, but you'll eventually hit a hard wall on layer 1 Ethereum.
That's either bottlenecked at execution speed or at a number of factors based on consensus. But often enough, institutions need more than that. On auditability and privacy, there's some uniqueness to layer 1 that lets you try to handle both, with things like Tessera and private network transactions. A lot of the layer 2s get that stuff for free based on the technology they're using, whether that's zero-knowledge proofs or semi-private transfers; it really depends. Then there's compatibility with layer 1 smart contracts, composable platforms, all the good stuff. Token transfers are expensive, they're not private, and there's often no compatibility with the ecosystem if you take advantage of privacy features on layer 1. So if you're using Tessera in a private network, you may not be able to port those assets, in a way that makes sense, to where they need to go if you're interfacing back with public networks.

So how do we get all these different components into one network? The answer is layer 2 solutions. We have a variety of different solutions here: we're not just looking at roll-ups to scale Ethereum. There are also EVM sidechains, and data availability sampling improvements, formerly known as sharding, which make up the official scaling roadmap of Ethereum layer 1.

So, what is a roll-up?
I know this is potentially self-explanatory for some folks in the room, but at the same time: a roll-up, or what you may have heard called a layer 2, is another network that takes advantage of the security of the underlying layer 1, in many cases Ethereum mainnet, to batch execution off-chain and then post the results of that execution onto the secured layer 1, where they can be retrieved later for fraud-proving and verification purposes.

So in reality: my throughput is limited on layer 1 in terms of security, in terms of what I need to store on the blockchain, in terms of the budget to secure that network. At this point there are roughly 600,000 validators times 32 ETH, which is a lot of money securing that layer 1, and it's a lot harder to build that security budget in elsewhere. There are things like proof-of-authority consensus, where the budget is secured not by the monetary value of the network but by the quality of the validators. But in this case, let's presume we're talking about the Ethereum base layer 1, which is frankly a 100%-uptime network with an insanely high amount of money required to attack it at any given second. So you have the base settlement layer, the most decentralized and resilient network, and as you go up the layers, you centralize a little more, making trade-offs in centralization that you gain back in scale. So, speed of execution: if I make assumptions about the centralization of what I'm executing, batching together my transactions, I can take the security concerns out of that centralization and take advantage of layer 1 to bring them back into play. Does that make sense?
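To make that batching idea concrete, here is a toy sketch of a single sequencer that executes transfers off-chain and posts only a compact commitment to layer 1. Everything here is a hypothetical illustration, not any real roll-up's code: real systems post calldata or blobs plus validity or fraud proofs, not a bare hash.

```python
import hashlib
import json

class ToySequencer:
    """Hypothetical single-sequencer roll-up: executes transfers off-chain,
    then posts only a commitment (a hash over the batch and the resulting
    state) to a simulated layer 1."""

    def __init__(self):
        self.state = {}            # account -> balance (the off-chain L2 state)
        self.l1_commitments = []   # what actually lands on layer 1

    def execute_batch(self, txs):
        # Off-chain execution: apply each (sender, receiver, amount) transfer.
        for sender, receiver, amount in txs:
            if self.state.get(sender, 0) >= amount:
                self.state[sender] -= amount
                self.state[receiver] = self.state.get(receiver, 0) + amount
        # Only a compact commitment is posted on-chain, not the raw activity.
        commitment = hashlib.sha256(
            json.dumps([txs, sorted(self.state.items())]).encode()
        ).hexdigest()
        self.l1_commitments.append(commitment)
        return commitment
```

The point of the sketch is the asymmetry: arbitrarily many transfers are processed locally, while layer 1 only ever sees one small, verifiable artifact per batch.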
I just want to make sure that I'm not confusing folks So, yeah, yep Yeah, so the consensus algorithm is not there is no consensus essentially because you're using a sequencer to choose the order of transactions that means that they It's a centralized entity that says this is the order of the transactions And these are the proofs that this is the order You take those proofs and you put those on layer 1 that can be checked later To ensure things like double spend don't happen and to ensure that consensus has reached among kind of What happens on the layer 2 But in reality, there's not a distribution of nodes that needs consensus because this l2 is much more centralized So they yeah, you defer to the roll-ups consensus, which is basically just I choose how to sequence transactions and I do it In many cases we're moving towards a world of kind of multiprover multi, you know, decentralized roll-ups that talk to each other Um, but in reality those consensus mechanisms still are based on kind of the ethereum layer 1 chain Securing upwards to avoid that centralization problem where the users are challenging the fraud proofs or where they're using zero knowledge proofs To ensure that the ordering of transactions and the inputs and outputs are correct It can yeah So it on so the way that this kind of actual movement between layer 1 and layer 2 happens is via smart contracts on layer 1 So to Alton's point there are contracts on the layer 1 that manage the funds that are existing in its entirety on layer 2 But they're making sure that they're secured via that layer 1 settlement layer as I mentioned so when users deposit funds into the layer 1 smart contract they appear on the layer 2 As this users funds so these contracts are often enormous in complexity and size And they manage funds across those two layers and they allow you to basically debit and credit specific addresses specific accounts But the execution of what's happening in those smart contracts on layer 2 Is not 
captured in the layer 1 smart contracts only the kind of state as it says like as it comes out Does that make sense? Yeah, absolutely. So I um, might confuse people and skip way ahead So as the merge created that separation of concerns like I said with the execution client and the consensus client Well, what we use to take advantage of the roll-ups consensus is this engine api which sits in the middle So basu is now set up to have consensus driven entirely externally As far as a main net or public network context Which means basu is handed down from on high What fork it should follow what blocks should be produced? What should be kind of You know it builds its own blocks, but it's being told how to be directed So now swap out this consensus client box with a roll-up sequencer That uses the engine api literally to direct the execution environment on what to do right next to it But you would have a separate process from layer 1. I don't Know how it's going to shake out We don't intend to have them on the same machine purely because The requirements of roll-ups from an execution perspective are often encompassing of the resources of the machine But I'd love to get to a point where you can basically Kind of toggle these behaviors or in theory have managed state in multiple components Which means basu can talk to them at the same time I'm not necessarily convinced that Java will be performing enough to do some of those things Without being costly, but we like where we're thinking these through actively right now these questions Yeah, please No, so those proofs are handled by another library Narc is one example that consensus develops AVM from the consensus developing where we we use basu to trace the blocks To sequence the blocks to produce the blocks, but we outsource the proving of the zero knowledge proofs like we we outsource the proving of what happened in the evm and we outsource the evm itself to a zero knowledge evm, which is essentially translating the op codes 
directly into proofs themselves. So that's an even weirder example, where Besu becomes even less of this execution-client box than it is in this picture. But there are so many different formats for this that what I'm trying to get to is: we're using a modular architecture to be able to pick and choose which building blocks we want, to serve each use case in a way that's efficient and has maximum reuse of components. I am skipping way ahead; I'm going to get to why we're using layer 2. I'm actually one slide ahead. No, I'm not. Okay, yes. So we're building that as part of our roadmap in Q2 and Q3. Our hope is that eventually you can pick and choose nodes on multiple layer 2s like you do today on layer 1. Today you say, oh, I could run Geth, Nethermind, or Besu, or whatever. We want that on layer 2: I'm running on Optimism, and I want to be able to run Geth, Nethermind, or Besu there. As it stands right now, we're doing that work today. We've got to build components around things like the sequencer, like I mentioned, the RPC tracing, some of the proving components. We're building all of that into the client and making it accessible, but it's going to take time to make sure we can do it in a generalizable way, because right now it's very much tailor-built to certain use cases, and we want input on what this looks like. But the reason I'm going to skip this slide, the reason we're doing all this, is... Yeah, yeah, go ahead. So we're building some of the components of Besu into the zkEVM. The plan is to roll that work back into the main codebase, but for now we're building it as an MVP internally to support this network. Over the next couple of quarters, my goal is to roll back as much of that functionality as we can into the mainline client, in a way that is generalizable to rollups: multiple types, multiple environments, what have you.
Okay. This picture that you're seeing is where I, and many other folks, think a lot of this enterprise stuff is going to land: a multi-prover environment on layer 2 and a single-prover environment on layer 3. So, what I mentioned earlier about these single-prover rollups: if I'm an enterprise, I want to inherit the security of public networks, meaning all the security budget I mentioned on layer 2 and on layer 1, billions and billions of dollars it would cost to attack that network, but I want to bring it up to what I'm doing. Layer 3 allows us to get there, because you can have the privacy of the network by saying: I'm a single-prover rollup, no one else can see what I'm doing, and I don't care. All I'm going to do is post outputs of what I'm doing on chain to ensure that the data and the state are consistent, and that as I go forward and transact, that data trickles down. I don't have to worry about securing it with my own Besu L1, because it's being secured by mainnet, and the data is completely opaque to anyone looking at it. Gross oversimplification there, but this rollup-of-rollups kind of chain world is absolutely coming.
Here we have three examples: Scroll, the Consensys rollup, and Polygon. We are already in talks with those networks to basically create this multi-prover environment where these rollups can talk to each other, move funds very simply, and all inherit the security architecture of Ethereum layer 1. This big second circle is that layer 2 environment, and these are all the individual L3s. If you've heard the term app chain, or L3, they're kind of synonymous. These are essentially what you're building today already with these Besu private networks: strip out some of the components like Tessera, add in some functionality around privacy and data availability, and you have that L3 world. So I know that's going to spur some questions. Let's dive in. Is there anyone else? Yeah. So the question was: if we add more layers, do we lose composability? The answer is no, because all of these are being built around the EVM, so the standards don't change as you go up and down the layers. In fact, it becomes relatively more composable, because when I go to something like an L3, I don't have to worry about the infrastructure that's built on the L2; I can customize what I'm building to fit my model of the world, and all I care about is that the data I'm submitting with these blocks and transactions trickles down in an appropriate way to L1. Does that make sense? Follow-up question, please. Okay. Only if you're going from L3 to L3; that's not typically the example, you would need something like a bridge there. However, you could go down to L2 and up to another L3. In this multi-prover environment, the goal is to avoid bridges as much as possible. Bridges are bad.
They're very prone to hacks; they're big honeypots. But in this scenario with a rollup, you can move the state in a different way and go down and back up as needed. In reality, all the state ends up here, but in a much, much more compressed format, and as you go up the layers the state separates into its individual components. It can be private, you can do these big transfers of tokens, and if you have the data availability you need, you can even include off-chain things in your app chain. The other parts of the network don't necessarily care if you BS yourself on your app chain; they only care that the security is inherited up and down. Does that make sense? So I can have garbage-in, garbage-out problems on layer 3 and still put garbage there, and no one really cares, because it doesn't impact this environment; it's just that my garbage is being secured by these other layers' economic incentives. I don't like to use the term garbage, but it's basically true, right? These app chains are designed to be an evolution of these private networks, where I can have my cake and eat it too, at scale: I can run an app chain and inherit all the security going down, but I also get the scale that comes from these L2 setups, by virtue of the fact that I'm offloading execution to potentially multiple chains simultaneously. So, we have a question. Yeah, so if you've heard the term sovereign rollup, that's also kind of similar.
The question was: is this a formalization of a side chain in an environment? And the answer is yes, because it removes the bridge component, which is what makes a private network into a side chain. The reason it's a side chain is that you're not inheriting the security of layer 1 Ethereum; you're only moving things back and forth, which means the security of what I generate on the side chain is only as good as the side chain. Whereas in this example, the security of what is generated on layer 3 is proven all the way down to Ethereum and back up. Yeah, the side chain uses its own consensus algorithm, whereas in this case you're trusting the prover. So you have to trust, again: if you were a single-prover enterprise, you have to trust that you're not BSing yourself, you have to trust that the L2 has strong enough economic incentives to secure what you're doing, and you have to trust that Ethereum has strong enough economic incentives to prove what the L2 is doing. Those last ones are pretty much borne out: like I mentioned, it costs on the order of millions of dollars per minute to attack Ethereum, and you have 600,000 nodes, a good portion of which you'd need to collude with. There's strong robustness there. This one is a little more dubious at the present point in time. If you go to, I think it's L2BEAT, they describe the different fraud proofs and risk profiles of each of the L2s. Right now Arbitrum is the closest to the most robust, but there's a ton of detail there explaining the trade-offs being made right now, and what those organizations are doing about them, meaning how we get to full fraud proofs,
to full composability, all that good stuff. They're being developed. I'm not implying that you should go out and throw away everything you've done on an L1 or on a Besu network and just come here, but I am saying this is the direction of travel, both for the client we're building and using, and for the network as a whole, the network of networks as a whole. Yeah, so the question was: can I explain the personas and the enterprise use case, why we would separate out these layers, and why users would use this type of scenario? It's a great question. I think what we see is the proliferation of communities, or business environments. If I'm a person playing a certain video game with an in-game economy, I'm on a layer 3, on an app chain, because I'm working specifically within that game. But maybe I want to play a new game tomorrow. I have assets that are composable because they're built on a common L2 infrastructure, so I can move my asset directly to the other L3 without having to bridge, because in this multi-prover environment it's secured by the L2. That means any other app chain connected to that multi-prover world inherits the same security guarantee, so the token or the asset can move seamlessly from one to the other without a bridged environment, and the user can move their assets quickly, without much in fees, and play a new game with the same assets. In a business context, it could be that I have a kind of intranet that does transactions and settlements within my own bank, and another bank is running on the same L2 multi-prover environment with their own app chain L3 that handles their internal processes.
What have you. If we need to transact, we don't necessarily have to stay in our L3 world, because we're being secured on this middle layer, so we can transact here without, again, having to bridge and worry about the security guarantees, because my assets are already secured by this buffer. It allows for these big movements of assets, the big connective tissue, without the bridge conundrum and its problems. And it allows us to say: okay, today I want a new L3, and I don't have to worry about standing up all that security infrastructure. I'm spinning up an app-purpose chain; that's why they're called app chains. Say I'm trying to do a new financial instrument: instead of trades, I want to do tokenization of stocks. I spin up a chain specifically for that. It can communicate with other portions of the L3 chains as I see fit, and it can also interact with other app chains via that intermediary L2 layer. Was that more confusing, or did I... So this is the part where I tip my own hand: I don't want you using Besu in this environment. The whole point of public infrastructure is that you just use the infrastructure, and you don't worry about the client that's running it. As the person who wants people to use my product, that's not necessarily what's good for me, but in the longer term, as a Besu product manager, I see myself servicing more networks than particular companies. Again, the whole point of why we started building on Besu for enterprises is that we want people building around the EVM; the rest will fall into place as we converge on these networks. That was the whole gotcha in this whole thing.
So, yep, it's the same process a rollup takes from L2 to L1. The question was how transactions get sequenced and batched on L3, and it's the same as L2 to L1: we batch and process transactions via that single-prover sequencer, and then they're posted to L2 as data blobs, with the same kind of guarantees we make when we post from L2 to L1. Yeah, so it depends on the type of rollup you're using, but with zero-knowledge proofs it's validating the state transition that occurs on L3, and then posting that result to L2 as a transaction. If you think of the exact same rollup that occurs from L2 to L1, it's done the same way: it's validating state transitions, and you're inheriting the security and scale guarantees of layer 2. Which is a big hand-wavy way of saying: we do things faster because an L2 has a more centralized sequencer, and L2 is still faster than L1 because L1 has an entirely decentralized sequencer. So you can see how they connect together. But again, you're offloading. Imagine the L2: every state transition that ever occurs on that big L2 network has to be batched, rolled up, and put on L1. As you move to these app-chain models, I'm only concerned with that one L3's batch and rollup of transactions, which makes things easier to prove and verify, and then they post those results on the L2, which are rolled up again. Does that make it clearer, or more confusing?
They don't necessarily have to, because all you care about is that I'm recording the state transition from L3 elsewhere. If I record my state transition only on the chain where it's happening, then the only veracity is that chain; it's the only source of truth. Whereas if I take the results of the state transitions and cascade them down, I'm securing that source of truth in multiple locations, and I'm not exposing the data, because, like you said, you could use something like a zero-knowledge proof, or you could otherwise opt to strip out the transaction information. What is important is that the output is stored on a chain where you trust the security guarantees, and again, the reason you would trust L2 is that it's secured by Ethereum layer 1, which is secured by the economic incentives of the network. Those are big security concepts to wrap your head around; it's not quite that simple. There's off-chain data, there's identity data, there's all this other stuff. But when you come up to layer 3, since you're operating within your own environment, you have much more control over what that granularity and privacy look like, because all you're putting on L2 is essentially a hash of that state transition, which shows it's valid in perpetuity. L2 doesn't get to see what I'm doing up here; it just says: oh, A happened, and now B is the state. It doesn't know what the state transition was, but it knows what the state was before and after, in an obfuscated format. And it's the same when you go from L2 to L1: L1 is not aware of the rollup state in the same way. It has the rollup smart contract, which keeps those checks and balances, but it's not worried about the execution of those smart contracts on layer 2.
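The "all you're putting on L2 is a hash of the state transition" idea can be sketched in a few lines. This is illustrative only: real rollups commit to Merkle state roots and attach validity or fraud proofs, not a bare SHA-256, but the privacy property is the same, since the commitment reveals nothing about the transactions in between.

```python
import hashlib

def commit_transition(pre_state_root: bytes, post_state_root: bytes) -> str:
    """Stand-in commitment an L3 could post to L2: binds 'state A became
    state B' without revealing any of the underlying transaction data."""
    return hashlib.sha256(pre_state_root + post_state_root).hexdigest()

# Placeholder state roots for the before/after states.
pre = hashlib.sha256(b"state-A").digest()
post = hashlib.sha256(b"state-B").digest()

c = commit_transition(pre, post)
# `c` is small and fixed-size, cheap to store on L2, while the full L3
# data never leaves the L3 operator's environment.
```

The L2 can later check that any claimed transition matches the stored commitment, which is the "checks and balances" role the rollup contract plays one layer further down.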
It's not necessarily worried about specific identity on layer 2. Like you said, it's kind of a credit-and-debit system: they're just keeping track of the state and the state transitions, not what's occurring within those transactions; that happens on the layer where it originated. It's all state, up and down. Yeah. Some of that state just happens to represent monetary value. Yeah. It can be about privacy, though; it depends. The balance would have to remain the same; you can't create and destroy value on layer 1. Sorry, there are a lot of questions, so we're going to go in order. The previous question was mostly about clarifying what I was discussing, and how these inputs and outputs are recorded at each layer: basically, they cascade up and down. That's why they're called rollups. I don't know how to simplify that more. Hart had a question in the back, or a comment. Yeah, so this, I think, is a good example on screen right now. Imagine:
I have block space of two blocks on layer 1, and I get twice as much block space on L2, but at the same time all of these transitions can be captured between two blocks on L1. So I have a slot, 12 seconds long, on Ethereum mainnet. I do a whole bunch of stuff on the rollup because I'm running a super fast sequencer that can work faster than the nodes in the Ethereum network can. So I get four blocks filled with tons and tons of little transactions. I basically hash and Merkleize all this stuff and put it back on layer 1, and when I get to the next block, the rollup will reference the change in state that occurred here as its canonical state, and that canonical state drives the continuation of that rollup. It's the same with layer 3, only more so: I do even more processing in the space between those two blocks, or even within one transaction on layer 2; it can't escape back down within that 12-second block time. A rollup is called that because I roll up a whole batch of transactions and push them back down to the canonical chain, in this case Ethereum L1. The canonical chain can be something else; it doesn't have to be Ethereum L1, but in all of the examples I've discussed today, the canonical chain is Ethereum L1. And again, you're putting money here, which goes to your point: you have to get assets from somewhere. So I have 20 USDC on layer 1; I now have 20 USDC in the smart contract for that rollup on layer 1, which is instantaneously available on L2. It's not based on a bridge; it's a smart contract guarantee. So the L2 hashes into that smart contract, saying all these things have happened, and at the output of this block, the fourth block, I make the state transition to the smart contract on L1. I don't care about the intermediary transactions per se.
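The "hash and Merkleize all this stuff" step can be sketched as building one Merkle root over the whole batch, so that hundreds of L2 transactions collapse into a single 32-byte value that fits in one L1 posting. Real rollups use different tree shapes and hash functions; this is a toy construction of the same idea.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Reduce a batch of transactions to one 32-byte root by pairwise
    hashing; the last node is duplicated on odd-sized levels."""
    nodes = [h(leaf) for leaf in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# 100 toy L2 transactions collapse into one root posted to L1.
batch = [f"tx-{i}".encode() for i in range(100)]
root = merkle_root(batch)
```

Any later change to any transaction in the batch changes the root, which is what lets the L1 contract anchor the rollup's canonical state without ever seeing the individual transactions.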
That's why they're all rolled up together: I take the first input and the final output, and I update what's happening on the canonical chain by virtue of that process. Yeah, so for the native currency, ETH, that is the case. You can mint, for example, certain tokens, rollup tokens like Optimism's OP token; they have to represent it in state, but you can mint and burn in interesting ways. If you're talking about natively denominated currency, though, it has to come from L1, and that's why the ERC-20 tokens have to move around. For example, if I'm on layer 2 and I buy 20 USDC, someone has a process behind the scenes to perform that swap on my behalf, essentially, because you have to purchase from either an exchange or a decentralized exchange, in which case those liquidity pools exist on both layers. Sorry, like directly to the L2? Yeah, that is how it works. When you bridge to L2, you're interacting directly with the contracts: typically you're debiting your account balance on L1 and crediting it on L2. What's that? Of the L2 and of the asset, right. So say I have had a billion USDC over here forever on L1; I can move pieces of it whenever I want, because the contract is deployed on L1 and lives there forever, or until it's upgraded. So yes, you have to trust the L1 smart contracts; however, the game-theoretical incentives are usually what secure that contract, more than the actual mechanics of the Solidity, which is a whole other box of questions. Back in the back. As far as Besu is concerned, we will be supporting the infrastructure in line with Consensys's business goals; this is the picture we see. So, yeah, I mean, you'll need... anything that happens here looks pretty much the same up here, just at a smaller scale, right?
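The debit-on-L1, credit-on-L2 bookkeeping described above can be sketched as a tiny ledger. This is a hypothetical illustration, not any real bridge or rollup contract's API: the invariant it shows is that everything spendable on L2 is backed one-for-one by assets locked in the rollup's L1 contract.

```python
class RollupDeposits:
    """Toy model of deposit accounting between L1 and an L2 rollup."""

    def __init__(self):
        self.l1_locked = {}   # per-account amounts held by the L1 contract
        self.l2_balance = {}  # per-account credited balances on L2

    def deposit(self, account: str, amount: int) -> None:
        # Debit on L1 (lock in the contract), credit on L2.
        self.l1_locked[account] = self.l1_locked.get(account, 0) + amount
        self.l2_balance[account] = self.l2_balance.get(account, 0) + amount

    def total_locked(self) -> int:
        return sum(self.l1_locked.values())

r = RollupDeposits()
r.deposit("alice", 20)   # 20 USDC locked on L1, instantly usable on L2
```

The "smart contract guarantee" in the talk is exactly this invariant enforced on-chain: L2 supply can never exceed what the L1 contract holds.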
So any components we build for L2 will be reusable up here. It's the same exact format, just a recursive link, and the complexity is in the smart contracts, not in the infrastructure code. So this is where we start to get into... the question was around zero-gas chains and things like that, in relation to this picture. Account abstraction is the direction of travel on that: you get things like paymasters, subsidized network transactions, free gas essentially. There's also work being done in the protocol around multi-dimensional gas, where I can pay for gas in a natively denominated currency, similar to what you have on Avalanche with their subnets, where you can pay for the L3's activity with a new type of token. That would allow a business to create, say, McDonald's coin, where everything that happens in its L3 uses McDonald's coin, but it pays for its security budget in, maybe, Polygon's MATIC token or something else, depending on the layer 2. We're not quite there yet; today this picture is all ETH-denominated, all the way down. Once account abstraction comes into play, that 100% changes, because you'll handle those payments at the wallet level, and the paymasters will fill the gap: I'm paying on behalf of a user who gives me McDonald's coin, and then I pay the network. That clashes a little with what I said about multi-dimensional gas, which opens it up at both levels of abstraction, the wallet and the protocol. It's an insane amount of complexity that, frankly, is still being worked out in real time, and I welcome your contributions to those discussions. Because of the security budget; that's the only reason. That's the difference I think people will struggle to understand between public and private networks: you're no longer paying for infrastructure.
You're paying for security budget. You have to pay to secure all of this, all the way down, in terms of fees, but in reality you could choose to run the infrastructure yourself; in order to inherit this, you still need to pay. And this is cheap, right? This is getting really cheap: transacting on L2 is fractions and fractions of a cent. So if you roll up a ton of transactions over a year and post only one on L2, you're paying something like a cent per transaction. So yes, you pay for security. It's a new paradigm, unfortunately. Did that answer your question? I don't think stake is the right word there, because again, you're not necessarily securing this with the inherent stake of what you're building. You are, yes, but frankly, this picture doesn't exist right now. There is no multi-prover world we live in yet. These organizations, Optimism, Arbitrum and the rest, are trying to build it, but again, this is trying to paint the picture of the next two, three, four, five years. Yeah, absolutely, and that's the beauty of layer 3: you can pick all those details yourself, as long as you get it onto the layer 2 chain. It's a great question. The question was: if, for example, we have compliance or other requirements that prevent us from using layer 2, how do we take advantage of a system like this? It's a good question, and I think it's the same question we're asking about why we can't use mainnet L1 right now. Once we sort that out,
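The amortization claim above ("roll up a ton of transactions, pay about a cent each") is just division: one L1 posting cost is split across every transaction in the batch. The dollar figures below are made up for illustration.

```python
def cost_per_tx(l1_post_cost_usd: float, txs_in_batch: int) -> float:
    """Each batched transaction's share of a single L1 posting cost."""
    return l1_post_cost_usd / txs_in_batch

# A hypothetical $50 L1 posting amortized over 5,000 batched transactions
# works out to a cent per transaction; a bigger batch drives it lower still.
per_tx = cost_per_tx(50.0, 5_000)
per_tx_l3 = cost_per_tx(50.0, 500_000)  # an L3 batching over a longer window
```

This is why the speaker frames fees as paying for security budget rather than for infrastructure: the per-transaction cost of inheriting L1 security shrinks with batch size, while running the execution infrastructure is a separate choice.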
I think those benefits, again, will cascade upwards through all these networks, for a number of reasons. But at the same time, maybe I don't have a great answer right now. Meaning what? Like... yeah, yeah. Well, the data privacy comes from the fact that you're not actually storing the data on L2. If you have an L3 that you operate, you have data visibility; like I said, I'm only putting the state transitions on L2, not any of the data itself. You'd have to... you can't bridge to another network without exposing what you're bridging. Well, so ZKPs come into play there, because you can make assertions about the data and move it. That's a great question; I'll have to think on that one. My cobbled-together answer would be: yeah, you wouldn't necessarily be able to go to another single-purpose chain without exposing what you're doing, but you can... I mean, zero-knowledge proofs, to your point, offer identity privacy and data privacy; it's not one or the other, necessarily. You make assertions about the data, about the account storage and what's sitting in it, as well as smart contract inputs and outputs, and you don't have to expose any of the underlying reasoning. Okay, this is a good time to go to this slide. Yes, the answer is Geth fast sync and archive nodes; those are not deprecated in Geth quite yet, but there are stated plans to do that.
Those are not deprecated and get quite yet But they there's stated plans to do that Yeah, as I mentioned earlier in the present the question was about Supported private features and support of those features going forward as I mentioned earlier in the call We currently have no plans like consensus will not be just ripping things out of the code base We only have plans to deprecate what I mentioned earlier, which was go for incompatible privacy modes and ivfc one Um, we are working with partners like web3 labs to continue to provide product level support for private networks Um, eventually our hope is to hand off all of these features and they can be maintained However, they are maintained It is an open source project consensus will not be maintaining them into the future But we will not be actively breaking or deprecating anything that I haven't mentioned Um, as for a specific list I can get back to you on that. I probably should work on that um But yeah as it stands we again, we're part of this presentation is to lay out kind of what is happening But the latter The part that I should really be getting to is how do you get involved in steering this roadmap? I don't want to be the only person telling you what I think the future of ethereum is you all should be building it In relight like in alignment with whatever works for your organization Um And you know a lot of that will trickle back into the main code base Um, but before I jump at risk of jumping ahead again Zero knowledge proof kind of one-on-one I have a prover and I have a verifier and I have a zk scheme these Are extremely complex cryptographic circuits that essentially, you know, if you the classic example I like to give is um You have somebody that knows basically um You walk into a cave and you have two areas or two um Pass to go down at one end of the correct cave. 
You want to be able to prove that someone knows which is the correct way to get to the treasure, or whatever is at the end of the cave, right? So you have somebody walk into the cave, and you're videotaping the outcome, but you don't see which side they go down or how they get there; you just see them come out. So you're able to ascertain, as they get to the other side... I've honestly forgotten that metaphor, so I'm not going to use it anymore, but it's on Wikipedia; it's very popular. But the prover: basically, I have somebody making an assertion, and I know the assertion I'm proving because I have the data in front of me. I'm looking at my data, I know it's correct, and I have a ZK circuit where, by agreement on a trusted setup, I can understand what's going in and what's coming out. This trusted setup is where a lot of people get hung up, but in reality it just means a bunch of participants get together to create a sort of shared random parameter. Hart is speaking up back there; do you have your metaphor for ZKPs? Yeah, it's just so convoluted, I always forget. So we have agreement among parties on a scheme we're going to use, specifically what kind of circuits, and again, we get together in this kind of ceremony. So, you've heard of SNARKs and STARKs. A SNARK is a succinct non-interactive argument of knowledge.
Meaning, I don't need to interactively check all this information; I trust the setup that went into it. I get together with a bunch of people, we agree on sets of parameters, and from then on I can trust the inputs and outputs of that circuit. That part sounds sketchier than it actually is. Think about the KZG ceremony going on right now for Ethereum mainnet, where we only need one honest participant to contribute a random number that he or she does not reveal, and all of the combined randomness is therefore random enough that this works. Every person can collude except one, and it will still work, which is the real magic of it. So we get a bunch of folks together, we agree on a scheme, and after that we can trust what comes in and out of the circuit. So I have my knowledge; as Hart said, I have a witness, which is this; and I have the proof of knowledge, which can be checked without revealing the content and without even talking to the person. I don't need to talk to them: I can go to the shared scheme, take the verification, put it back in its Pandora's box, and know that it's true without interacting. That's why it's called a succinct non-interactive argument of knowledge. Anyway, there are different schemes, and I'm not going to go into the details. The circuits have public inputs, private inputs, and the statements of what we're asserting. The reason I'm not going to go into too much detail is that, frankly, a lot of the math is over my head; it's very complicated. But a lot of people smarter than I am have proven that it works, and it's a very widely accepted practice in computer science and cryptography. So how does this relate to what we just discussed? I have different actors in my environments on L2 and L3, and I have people that maybe just need to check it.
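The "one honest participant" property can be sketched concretely. Real ceremonies like KZG combine elliptic-curve contributions, not integers, but the intuition is the same as XOR below: if even one contribution is uniformly random and then discarded, the combined value is uniformly random, no matter what everyone else did.

```python
import secrets

def combine(contributions: list[int]) -> int:
    """Fold all ceremony contributions together with XOR (a stand-in for
    the real combining operation)."""
    out = 0
    for c in contributions:
        out ^= c
    return out

# Three participants collude: their contributions are fully known.
colluders = [0xDEADBEEF, 0xCAFEBABE, 0x12345678]

# One honest participant contributes fresh randomness and throws it away.
honest = secrets.randbits(32)

secret = combine(colluders + [honest])
# Knowing every colluding value tells you nothing about `secret` without
# `honest`, because XOR with an unknown uniform value is itself uniform.
```

This is why the speaker can type random numbers on a keyboard, discard them, and still strengthen the whole ceremony: the colluders would need *everyone's* value, including the discarded one, to reconstruct the secret.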
They don't need to know the actual data. We agree on the scheme, basically the fundamental building blocks of the network we're transacting on, and we can do this proof of knowledge very easily, and not exactly cheaply, because the machines that do this kind of math are very powerful, but we can do it. Going back to this picture: we have statements of zero knowledge, and provers that verify either amongst themselves, so I can verify amongst all the participants in my little world, or I can make assertions that I prove to the whole world and that we can check at the full layer. So an L2 has a specific ZK circuit that's set up at the time the L2 is created. Again, we typically only need one honest participant, and oftentimes they have somebody participate and throw away all their data, just not keep it, so that no one actually knows it; so there's no backdoor into the circuit unless you know full well that everybody you worked with in the trusted setup colluded. And that's a big assumption, because if you're honest, you can just not collude, and that's enough. Like I mentioned, the ceremony we're using on Ethereum mainnet had 85,000 participants, and I personally typed random numbers on my keyboard.
So I actually don't know what my input to that randomness was. So: collect entropy, create circuits, make proofs. That's where the privacy value comes from: again, I don't have to reveal my data to the verifier to prove the veracity of that data. That means in L3 land I can transact all the livelong day, hit the circuit on L2, and get a proof back that we can both verify, without revealing my data up here. This also applies to account state: I can take the entirety of my account state and split it into all these little pieces, but I can still get a unified picture of it, because I'm using this format. Here we have a compare-and-contrast of what we have with private transactions today versus what's going on with SNARK privacy. You all know, and potentially love or hate, private transactions: privacy group A, privacy group B. Privacy group A can see all of its data, but not necessarily what's going on in these nodes; for now, ignore this part. This is one Besu network, and obviously privacy group B looks at this. However, what's not pictured is the Tessera nodes that need to be running, and the contracts and extra code needed to manage all of this information. What's also not pictured is that you have a separate state being managed in the privacy subgroups to coordinate all of that. The difference on the SNARK side is that we have a unified picture of the state, but I potentially hold only my own verified data. If you go back to the previous picture, I only control the data on my node; that represents my picture of the world. I put it into that ZKP scheme, and we have a unified state, but only my inputs are revealed to me. It's a lot more effective at managing unified state without having to set up the actual management of the privacy groups. Another of the pros is that you get higher throughput, because, well, you know,
part of it is that you're not doing all these round trips, you're not running on an L1, but you have a unified state and your token transfers scale. That's fully private, or as private as you would like. ZkEVMs sort of invalidate this a little bit, because they're slow right now, but they're getting a lot faster. The concept of the zkEVM is the same as with state. Instead of verifying the inputs and outputs, I verify all the opcodes and everything that goes through the EVM. So I put my EVM execution in as the thing being verified. Going back to the previous picture: I have EVM execution, and I basically run a big old trace. If you're familiar with the trace logs in Besu, it does a huge dump of the execution, and by huge I mean huge; they're sometimes 30 gigabytes. I put that trace into the prover in the ZKP scheme, and I have a verified execution of the EVM. That is an extreme oversimplification of the process. It takes a lot of work and uses very heavy hash algorithms and super, super beefy computers, but at the end of the day Besu does a big old trace of what happened within the EVM, we dump that trace into a zero-knowledge circuit, the circuit proves that the trace, the EVM execution, happened in the order it said it did on the state it said it did, and then it posts that output. None of the underlying data is revealed. So that means you have private smart contract execution for the specific provers, but the state transitions are managed across that unified account state. Like I mentioned, again a simplification, but imagine you turn the whole EVM into one big old circuit, and the inputs and outputs are those provers and verifiers. Question in the back? Yeah, that's interesting. Let me reiterate the question to make sure that I have it correct. The question is around, when you're using this multi-layer model and you're using zero-knowledge proofs, how do you share the proving schemes across the layers in a way that means I can transact up and down without
my data becoming garbage, and move it across layers in a way that's cohesive. Is that right? Yeah. So that's the crappy part: we run the risk of creating the same silos that we had before. This model doesn't necessarily solve that challenge. If you look at this kind of unified version of state that I discussed, it does not actually carry everything. Like I mentioned before, when you do that rollup process to move up and down, you lose granularity of data for a number of reasons: one, you often want to preserve privacy; but two, I'm only posting the results of all of this to L2. So that doesn't mean someone can come along and say, oh, so-and-so has an account over here, I want to be able to pull that data. You would need to request that data from the provider themselves. So we do run the risk of creating innumerable silos again. The counterpoint to that is that it doesn't necessarily have to be designed in that way. You can design it in a way where I can traverse those links and get the data, because you can unroll all of that state if you have the appropriate access, right? Well, right now, you mean the rollup state on L1? That's entirely dependent on the rollup operator to decide. It's all Ethereum standards. So the question was: how do we standardize, and is there work being done around standardization of the movement of data across these layers? Once you get from L1 to L2, it's identical Ethereum-based standards; all of it is the same on L2. The data is typically available. In an optimistic-rollup sense, that has to be the case, because otherwise you can't prove fraud. In a ZK rollup it's based on the proving scheme, where you can't even make a modification to state that isn't valid, because it won't satisfy the zero-knowledge proof. So it is dependent upon, one, the rollup operator: what do they want to do, right?
It could be a public L3, or it could be an L3 that is completely opaque, where they only inherit the security architecture of L2. They don't inherit the transparency and all that good stuff that we like blockchain for in many respects, because it's a choice. So they could have the most opaque state in the world and just take advantage of the security budget of L2 and share nothing. So when we talk about regulatory compliance and data privacy and all this stuff, it's mostly a matter of getting people comfortable with the fact that you can't really unwind this data if you choose to make it so. It's just about the security assumptions, which is a hard pill to swallow if you're a regulator or, you know, someone else. Yes, yes, so you're doing that. But maybe that's all you want, right? Maybe all you want is a cheap state machine that allows you to keep consistency and consensus among a set group of participants for low cost. That is one great use case of an L3. Another one is, like I said, I'm running a world that has custom rules, but maybe it's open. There are permissioned and public L3s; you can design it however you'd like. "Public permissioned" is a term a lot of people will be using around this. You've heard of private permissioned and public permissionless; public permissioned and public permissionless are both applicable in this instance. The reason it's public is because of the security budget and all the other stuff that I mentioned on L2 and L1. The data is not inherently public, or it has to be permissioned. So, say I've set up a scenario where one trading desk has its own sequencer, with a bunch of accounts that it supports, and the other trading desk has another sequencer, and it all comes to L2. It sounds like they won't be able to see each other's transactions, but in terms of state transitions it's all handled. So you want the trading assets to be separate? Correct, I want the trading assets to be separate.
I want them to also perform very fast, yeah, but I don't want them to know each other's trades. Yeah, so the question was around, basically, L3s and privacy as it pertains to individual state held in those kinds of environments, and the environments in this case are two trading desks. It doesn't even have to be that complicated: node 1, node 2; trading desk 1, trading desk 2. Trading desk 1 keeps a record of these accounts, literally Ethereum accounts; trading desk 2 has access to those other accounts. In the zero-knowledge format, this state is technically unified across all three, but I'm doing the proofs prior to committing up to state in many cases. So I can transact and do all that good stuff in my local situation, presuming that I have access to all that data, and I prove it prior to even sending it to the sequencer. I can have one sequencer that handles all of this, and the state is unified via the SNARK scheme, as opposed to me saying, oh, let me have a separate rollup that we can both inherit. Now these are all one kind of network, and they split the state via these zero-knowledge proofs, and they only verify what they have. And then the sequencer and the world state of these Besu nodes coordinate to say: this is the full picture of the proven state, based on the zero-knowledge circuits. That's super powerful stuff. I do want to caveat that this world where they have 1,000 TPS is not real yet.
Yes, it's too expensive to run the machines on these circuits right now. We expect this cost to go way down, because there will basically be specific hardware to do these kinds of proofs as they gain more proliferation. Yeah, similar to ASICs, but just for these kinds of ZKPs. So, to reiterate what I just mentioned: you can have a unified yet private state and verify the correctness of the entire state by using the same commitment scheme and having the same nodes gossip around already-proven state transitions and account storage. Basically, I say, oh, I transact 300 from here to here. I prove that whole thing, I push it up to the main picture, and then the correct state transition is proliferated to other nodes as gossip. And they just say, oh, I'm applying this state transition; I check the proof really quick, because checking the proof is really easy. Generating it is not; that's the cheap part that was mentioned earlier. I'm Besu node B: I see the proof come in, I check it, I say, that's cool, that makes sense, and I update my state without knowing any of the data underneath it. So these trading desks can stay in sync without knowing what any of the others are actually doing, as long as they're on the same commitment scheme. Which is a relatively simple thing to get commitment on, especially in a business context: if you're business A and business B and you're different banks, you're sure as hell not going to share your randomness with the other bank that's committing to the same setup, right? So collusion is a lot harder, especially when there are competing business needs in what is almost the same marketplace. It's really cool. There's a reason I'm talking about it; it's very interesting stuff and it has a lot of potential. The only caveat I'll mention is that it's kind of slow for now.
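The gossip flow just described ("I check the proof really quick and apply the state transition without seeing the data") can be sketched like this. The `prove`/`verify` pair here is a toy hash construction standing in for a real SNARK prover and verifier, and every name is illustrative; in a real system, verification uses only a public verifying key and the private transactions never leave the prover:

```python
import hashlib

def prove(old_root: str, new_root: str, private_txs: list) -> tuple:
    # Stands in for the expensive SNARK prover: only the party holding
    # the private transactions can produce this. The witness it publishes
    # is just a hash, so the transactions themselves stay private.
    witness = hashlib.sha256(repr(private_txs).encode()).hexdigest()
    proof = hashlib.sha256(f"{old_root}|{new_root}|{witness}".encode()).hexdigest()
    return proof, witness

def verify(old_root: str, new_root: str, witness: str, proof: str) -> bool:
    # The cheap check every node can run.
    return hashlib.sha256(f"{old_root}|{new_root}|{witness}".encode()).hexdigest() == proof

class Node:
    def __init__(self, root: str):
        self.root = root

    def apply_gossiped_transition(self, old_root, new_root, witness, proof):
        # Node B: check the proof quickly, then update state without
        # ever seeing the transactions behind it.
        assert self.root == old_root and verify(old_root, new_root, witness, proof)
        self.root = new_root

# Desk A proves a private transfer...
proof, witness = prove("root-0", "root-1", private_txs=["A pays B 300"])
# ...and node B syncs by checking the proof alone.
node_b = Node("root-0")
node_b.apply_gossiped_transition("root-0", "root-1", witness, proof)
assert node_b.root == "root-1"
```

The asymmetry the talk points at is exactly this split: `prove` is the heavy step, `verify` plus the root update is all a syncing node ever does.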
I think it'll get a lot faster. Yeah, exactly, it's built to scale more, because you want to be able to divide, and that's how you scale. For now, like I said, for now, yes. But this could be an L3 or an L2 world. You could set up a Besu bank L2 and just post to Ethereum; it doesn't need to be an L3. This could be a multi-prover scenario where I want to work with other banks that I'm not necessarily trusting to singularly sequence transactions. If I don't trust another party but I have a singular sequencer, they can collude to move the order of transactions in a way that might benefit them, because the sequencer is the centralization point. That's why, in this scenario, we want L2 to be a multi-prover, multi-sequencer environment where they can't collude on the sequence of those transactions, because, first of all, I don't necessarily know what's coming in, in order, to collude, since I'm just getting batches and I'm kind of blindly executing data here. But there's also, for example: if this was all run by one operator, they could say, oh, I don't like the way these look, for MEV purposes or something else, so I'm going to swap two transactions. But the multi-prover environment, and this is where it gets a little complex, is basically decentralized sequencers and decentralized provers, which means my transaction is routed and split up into different pieces, and it's kind of impossible to know what's happening and to collude like that. Again, this is an aspirational photo. We are trying to build a multi-prover world.
There's active discussion among layer twos about building this world right now. But frankly, if you go to that website, l2beat, there are a ton of trade-offs from a security perspective right now. So it's not built quite yet. Yeah, or you execute in between; there are 12-second block times on Ethereum L1, but I might have 200 blocks come in at the same time, 200 sets of transactions that I move all together in those state-transition commitments. Because the ZK commitments are also really small, same with optimistic-rollup commitments, since you're basically just saying "this state transition happened" as opposed to recording the entire state transition, it makes it a lot cheaper and smaller, and we dump it here on L1. I don't quite remember that, but in theory you shouldn't need a middleman for anything. Do you mean on L2? Yeah, on the layer below, you only care about the output and the input. You would need to commit all these changes on the originating layer, because otherwise you could double-spend. So if A pays B, A pays C, or A pays B and B pays whoever, these nodes need to keep track of everything, but this layer only cares about the final results and the inputs. That's the industry perspective.
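One way to picture why batched commitments are "a lot cheaper and smaller" is a Merkle root: hundreds of state transitions collapse into a single 32-byte value that gets posted to the layer below. A minimal sketch, not tied to how any particular rollup actually encodes its commitments:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold a batch of state-transition hashes into one 32-byte root,
    the small commitment that actually lands on L1."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# 200 batched transitions collapse to a single 32-byte commitment.
batch = [f"transition-{i}".encode() for i in range(200)]
root = merkle_root(batch)
assert len(root) == 32
```

The originating layer keeps the full transition data (so double-spends can be detected), while the layer below stores only this root plus the inputs and outputs.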
Yeah. So the question was, where is this vision coming from? Like I mentioned, these are different companies. We're working with other L2 companies to build this kind of vision, because we know that a single-sequencer, single-prover approach will not be trusted, because you can collude. A single-sequencer, single-prover scenario is semi-trusted, because the users can kind of prove what's happening; they can make fraud proofs, all that good stuff. But there's the ability to collude when you have a single organization running a centralized sequencer, because they can order transactions in a way that benefits the operator. That's why we're working towards this multi-prover scenario. Any more questions here? In a perfect world, Besu execution lives here, Besu execution lives here, and it lives in every one of these. So Besu is executing EVM code, it's batching up transactions, it's producing blocks, and those are sequenced and proven by the L2 infrastructure, not by Besu. Besu is not intending to create zero-knowledge proofs to do this stuff; it's not intending to sequence the transactions. You might be modifying Besu to become sequencer-appropriate, but it's not meant to do the heavy lifting of the proofs and ensuring certain things. It will record state. It will produce blocks and post blocks.
It will execute things in certain instances. The EVM that I'm describing in the consensus example does not use Besu's EVM; it's basically a custom EVM that is built around ZK circuits. But Besu, as was mentioned earlier, has an EVM library that can be ripped out and replaced. So we basically do that, and we're modifying other components of Besu to make the rest of it work. And again, I want to make sure it's known that we're intending to generalize this, so that Besu lives at these different layers and we use different building blocks to get there. So this is a great segue to where Besu architecture is headed, where I think we'd like the project to go as far as public networks are concerned. A quick merge review recap: this is the pre-merge picture on the left and the post-merge picture on the right. In the pre-merge picture you have PoW consensus wrapping the execution layer in a complex, wasteful manner. Basically, a lot of transactions come in, a lot of execution happens, and then you wrap the security of all this in the complex PoW process.
That is Private networks do the same thing, but instead of p o w write q b f t i b f t What happened this basically is unchanged with some caveats Um, it says it wraps evm execution secures it on chain The same thing happens kind of on the right hand side except Instead of wrapping evm execution in p o w We wrap it in a whole new process that is built purely for proof of state in the ethereum context So again, this is just the ethereum context Um, we have its own layer known as the beacon chain Beacon chain is separated into slots instead of blocks blocks can exist in every slot And they basically do but not every slot is required to have a block The slots also have attestations So these are the witness signatures essentially that show that that slot is valid It's other nodes that go in they are separated logically And they attest to these things and it iterates over the entire set of validators in the network so that Pollution is harder and more difficult Super simplification We still have this consensus layer that secures evm execution because if we didn't have Have agreement among parties on what actually happened in those transactions. It's the same problem I talked about in the rule of example So we have to prove what happens in here is valid And the consensus layer provides a bunch of witnesses signatures And all that good stuff which is aggregated and then put into one block that is this whole big thing Which has attestations It has potential validator exits. It has withdrawals Which is not in this whole diagram. 
It has the signatures from the individual block proposer, and all this stuff. So where does Besu fit in? This is from our very own documentation: execution clients such as Besu manage the execution layer, including executing transactions and updating the world state. Execution clients serve JSON-RPC API requests and communicate with each other on a peer-to-peer network. The real magic is the Engine API. This was created to allow those two layers to seamlessly communicate with each other. This is a closed port. It can be extended, but it basically only exists to let these two nodes talk to each other. The consensus layer receives the information about what is going to happen with the next block. Is there a reorg? That's called a fork choice update. Is there a new payload of transactions that I need to secure? It sends all this information to drive the execution client, and the execution client says: okay, I've got a block payload, I've got a consensus payload, I'm going to do some EVM stuff, and I'm going to post that block to the chain. Small difference: the consensus layer uses a REST API, execution clients use JSON-RPC. Different P2P networks, too: this one's called libp2p, this one's called devp2p. Similar enough, but they gossip different things and they're meant for different stuff. This is going to become a lot more robust, "this" meaning the consensus layer. The folks building it are putting more and more functionality into the consensus layer now that we have a separation of concerns, and we can focus this side a lot more narrowly on EVM execution and world state management. Think about it: all you really need to feed the EVM is the current state of Ethereum. Execution clients don't give a crap about the other blocks, really; they do insofar as the world state, but they don't necessarily care about old stuff and consensus-based mechanisms.
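For concreteness, here is roughly what a consensus client's fork choice update to the execution client looks like on the wire. `engine_forkchoiceUpdatedV3` is a real Engine API method, but the hashes below are placeholders, and a real call also needs JWT authentication on the engine port (8551 by default on Besu). This just builds the request body rather than sending it:

```python
import json

def forkchoice_updated(head: str, safe: str, finalized: str, request_id: int = 1) -> dict:
    """Build the JSON-RPC body the consensus layer sends to drive the
    execution client's notion of the chain head."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "engine_forkchoiceUpdatedV3",
        "params": [
            {"headBlockHash": head,
             "safeBlockHash": safe,
             "finalizedBlockHash": finalized},
            None,  # payloadAttributes: set when asking the client to build a block
        ],
    }

# Placeholder 32-byte hashes, not real blocks.
req = forkchoice_updated("0x" + "aa" * 32, "0x" + "bb" * 32, "0x" + "cc" * 32)
body = json.dumps(req)
assert "engine_forkchoiceUpdatedV3" in body
```

The companion call, `engine_newPayload`, carries the actual block payload; together these two are the "drive the execution client" loop described above.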
So they defer all of that up here. That's why it makes rollups a lot more interesting when you think about execution clients, because all the consensus stuff is outsourced to the rollup sequencer. For some of these different components, we don't care about how consensus is reached; we just start spitting out data as fast as possible about the world state and, sometimes, the EVM. Now, we might build a light client into Besu that allows you to connect and sync the chain, but you will never be able to validate blocks or run a validator with just Besu. You will need to get a consensus client like Teku, Lighthouse, Prysm, Nimbus, Lodestar, or whatever. There are light APIs that allow you to connect and sync, so you will probably be able to use just Besu soon to follow the chain and run data queries. We are having to run another layer, but it's not built quite yet. You currently need to sync mainnet down here, and there's a big old bridge that goes to here, and that's pretty much the only evolution of those kinds of networks in terms of layer one on mainnet. I'll talk a little bit more about it, but it basically sits alongside this execution client on mainnet, because you have a bridge contract that's keeping track of the private state of the QBFT network and allows me to relay messages and transactions back and forth. There are things like atomic swaps and cross-chain messaging, but it basically sits in a little chunk of this mainnet execution client that's spread across every Ethereum node there is. Yeah, there's a variety of bridges; that's not the only thing, but in terms of a Besu QBFT network, that's kind of the option as it stands now. It depends on the data that you're looking for. Things like eth_call, getBalance, all the good stuff: that's all down here.
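To make the split concrete: balance and contract queries go to the execution client's JSON-RPC endpoint (8545 by default on Besu), while validator and attestation data comes from the consensus client's REST API. A sketch of the two request shapes; the address is a placeholder, and this builds the bodies without hitting a live node:

```python
import json

def eth_get_balance(address: str, block: str = "latest", request_id: int = 1) -> dict:
    """Execution-layer query: JSON-RPC against Besu."""
    return {"jsonrpc": "2.0", "id": request_id,
            "method": "eth_getBalance", "params": [address, block]}

# Consensus-layer data comes from Beacon API REST paths instead, e.g.:
beacon_rest_example = "/eth/v1/beacon/states/head/validators"

req = eth_get_balance("0x" + "00" * 20)
assert req["method"] == "eth_getBalance"
print(json.dumps(req))
```

Same node stack, two very different interfaces: one is "what is the world state right now", the other is "what is the state of consensus".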
This is typically where the data is. Up here you get the state of consensus on the chain: these are the people that need to vote soon, these are the votes, these are the peers doing this, this, and this; there are subnets and committees and all that stuff. Down here is the bulk of the data, because that's where the EVM is, that's where the contracts are, that's where the account balances are; the good stuff still sits here. But frankly, I really like this picture, because it separates out all the stuff that typically bogs down these clients, puts it up here, and lets this layer focus on executing blocks as fast as possible and checking the veracity of data. Any more questions on this slide? I only have five minutes left. We have two client abstractions today: the public network and the private network. This is what I'm getting at throughout this whole thing, right here: the Engine API allows us to have pluggable consensus going forward. This is a somewhat aspirational picture; however, it's true, right? You're directing the EVM from the consensus layer, and everything else exists to support this. Literally everything else, except maybe here, exists to support these two things, which drive public network consensus. And that could be proof of stake, or it could be a rollup consensus API. I can fake it: I can run a sort of proof-of-stake client that makes use of these Engine API calls to deliver payloads to Besu's EVM. And that's what we see in rollups, in the proof-of-authority rollup that we built on Besu.
That's what we do: we kind of fake it for the Engine API. So this is going to blow the whole thing up, and this is a versioned spec: the core devs consistently update what it does and how it looks, and we consistently update formats to support new data. But the key part is that it's extensible. This is the part that I particularly love. We have the privacy components here with Tessera. We add in a sub-protocol specifically for BFT-related consensus; it allows us to network in the appropriate way that Antoine touched on earlier. And then we have the actual consensus pieces, and we still have proof of work, right? We still have Ethash; you can use it in a private network. I don't know why you would. Pluggable consensus: it is still pluggable, right? You can choose what you want. As for everything else, we want to decouple these abstractions over time. I would like a lot of these pieces to not be so close together. I'd like this to live over here, I'd like this to live down here, I'd like this to live over here. In order to support both of these, namely the QBFT consensus and these pluggable consensus pieces, we've done a lot of tight coupling with these sub-protocols and the protocol schedule, and it's turning into a spaghetti mess, and we're looking to split this out. And when I'm talking about splitting this out, don't worry:
this is not a functional change. The resolution is that we're looking to decouple these over time and make them more reusable and hopefully more extensible, because in reality an Ethereum node looks like this: an RPC interface to the outside world; networking that goes back and forth to the RPC interface and potentially to other peers; requests end up in the transaction pool; go to the EVM, which is frankly optional in the case of Besu; go to the state storage and back; go to the consensus mechanism and back; and then it's a round trip. Again, an oversimplification, but we really only need this, and all of these pieces can kind of come and go. Oh, that's an awesome idea, and we haven't explored that, but I'm going to write that down right now. So again, we are trying to decouple these pieces because we want reusable building blocks for a bunch of different stuff. Mainnet PoS chains: I have all the pieces I just mentioned, and this blue box is no longer a Besu box, it's kind of a generic box. So I gave a little breakdown of the basic components. Look at how similar these two things are: rollup state storage, rollup state. I might need a state to keep track of what's happening on layer one and in the rollup, so I might need to add a new model; that's why I have a double box there. The sequencer is still externally driven consensus; however, there's a little bit of green for the Besu box, because we might still build these components in Besu. Anyway, same things: the EVM is optional, you can rip it out; the P2P network is exactly the same; rich contracts are a layer. So these were kind of throwaway slides, but what I'm trying to say is that all these components are the same, yet the Besu picture today doesn't look like these discrete boxes. We're trying to get there. And that is, I think, the perfect segue for me to go back to the roadmap, right? New RPC formats, talking about the Engine API, talking about RPC for this kind of abstraction: this is the first step.
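The round trip described above (RPC, networking, transaction pool, EVM, state storage, consensus, and back) can be sketched with each piece behind a swappable interface, which is the decoupling goal. These names are illustrative only; they are not Besu's actual Java interfaces:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MiniNode:
    # Pluggable pieces: consensus could be PoS, QBFT, or a rollup sequencer;
    # the EVM is optional and replaceable, as the talk notes.
    consensus: Callable[[dict], bool]
    execute: Callable[[dict], Dict[str, int]]
    state: Dict[str, int] = field(default_factory=dict)
    txpool: List[dict] = field(default_factory=list)

    def rpc_send_transaction(self, tx: dict) -> None:
        self.txpool.append(tx)                    # RPC -> transaction pool

    def produce_block(self) -> dict:
        block = {"txs": self.txpool[:]}
        for tx in block["txs"]:
            self.state.update(self.execute(tx))   # pool -> EVM -> state storage
        assert self.consensus(block)              # state -> consensus -> done
        self.txpool.clear()
        return block

# Wire up trivial stand-ins for the pluggable pieces and do one round trip.
node = MiniNode(consensus=lambda block: True,
                execute=lambda tx: {tx["to"]: tx["value"]})
node.rpc_send_transaction({"to": "0xabc", "value": 5})
node.produce_block()
assert node.state == {"0xabc": 5}
```

Swapping `consensus` for a different callable is the whole point of the "pluggable consensus" picture: the round trip stays identical while the security mechanism changes.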
This is not a complete thing. Down here, we're trying to build flexible rollup packages. That means separating all those little boxes so that we can stack them however we want. It might mean, instead of a Besu-driven consensus engine, I defer to something like the Optimism sequencer, and Besu simply executes underneath. So again, we're just trying to separate all those building blocks and different components, ultimately to get here, where we take those building blocks and we get all the chains for free if we want them. Frankly, this is probably the biggest lie on the whole slide deck, but we're going to get aggressive with this going forward. We need to split the code base open, because I want it to be easy to maintain private network features without breaking stuff, because we want more people to maintain the private network features. So we're really looking forward to continuing this modular work to get those blocks to be discrete, so that we can get where we need to go. That brings me to my last point, and then we'll do a quick Q&A: contributing to Besu, and core development. All of what I just said is, again, my vision plus the Consensys vision; it's not the only vision for Besu. We want everyone to contribute. How do we do that today? Besu is primarily maintained by contributors at Consensys and Web3 Labs. These contributors are primarily focused on public networks; that's true of almost everybody. Again, Web3 Labs, Conor and his team, are going to be helping us a lot with this going forward. We want more people to help take on this mantle. We have existing guidelines; I'm sure this deck will be shared, and this links you to a page that tells you exactly what you need to do. We have bi-weekly contributor calls. They are open forum, and we have multiple time slots to support Australia, Asia, US, and EU time zones. We have
a public kanban board. You can go see exactly what our backlog is, what we're building, what bugs we have prioritized. You don't see your bug? Get it in there. This is mainly maintained by me, so you'll probably mostly see Consensys-focused items moving around; again, I would also like that to change. Code contributions: it's very straightforward. Again, contributor calls. We have a lot of labels, for example: bugs, consensus-related issues, dependency updates, developer experience. This is one to call out: "doc change required". If you update something that needs a documentation change, use this label, and the doc team at Consensys will come scoop it up and fix it. "Good first issue": if you're looking to start contributing code, go here; it's very simple stuff. There's an ETC label if it's for Ethereum Classic, an EIP label if it's an Ethereum Improvement Proposal, yada yada yada. We try to make it easy. You need ten or so pull requests to become an active maintainer, and an active maintainer can approve pull requests into the main repository. That is to say, you can still contribute without being an active maintainer; your contributions will always be reviewed by one of the maintainers, and we try to do it within a few days. So we love external support, and we actively triage and support all of those things. If you're building a PR against Besu, you'll have engineers that have been working on it for years reviewing it within a couple of days. We are working on additional technical documentation.
I'd like to do things like break down the EVM library, break down the protocol schedule, and more. I'm thinking that we will do some kind of content series where the Besu engineers run through what a lot of these abstractions are and what they mean, similar to what you did earlier, but actually breaking down the protocol schedule and how all these different network types work. There are certain things that, when you touch them, the codebase will have a weird time. However, we have a really robust set of tests and pipelines, so any time you submit a pull request to the Besu repo, it will be checked against our pipeline, which will hopefully highlight some stuff. Yeah, and one good thing about that is that the client incentive program will pay for that CI. So once withdrawals are enabled, we will hopefully have free infrastructure, and we'll hopefully also have more money, if ETH goes up a lot, to pay for bug and grant bounties. One thing to note, which I actually don't know if I put in the slides, is that Besu participates in a client incentive program, as was mentioned the other day. If you contribute to Besu up to a certain point on mainnet-related issues, you can literally get paid by the Ethereum Foundation. That is good optics for organizations, and it's good optics for individuals, in whatever capacity you choose. There's also the Protocol Guild. These are a whole bunch of other topics, but suffice to say: building on mainnet is rewarding, and building on Besu is rewarding. I'd like to use the remaining client incentive funds to incentivize all kinds of stuff, not just mainnet, and you can get paid for it. So yeah, pop into the Besu contributors channel. If you've got spare computing power and brain power, we want it. Any of the existing processes I just mentioned can be changed.
We have an open governance process, we have relatively flexible processes around the contributor calls, and we're pretty active about accepting and merging code. You only need maintainer status to approve PRs in the main repo; it doesn't mean you can't code today and get PRs approved. We hang out in the Discord. You can reach all the engineers and contributors there. Don't post technical questions here; post process-related and project-related questions here. We have other channels for that. Some new proposals: I would ideally like to do what I'm doing now on a quarterly or half-yearly basis, a product update of some kind, a steerco, what have you, because I'm getting a lot of looks and questions that have proven that these things are useful and people are learning stuff. It doesn't necessarily even have to be in person; I'm very comfortable sitting in my house and lecturing for an hour. Again, this is a new proposal: a biannual core development review to align on Ethereum improvement proposals and standards. That means, basically, covering what core dev changes are coming that will impact what you do, if you choose to use the latest versions. Again, these are just proposals; I don't know if anyone wants this. I already do these calls for Consensys and for other organizations, so I can either open up the existing calls and just report and share, or I can do the same thing as a Q&A and interactive session, where I break down everything that's happened in Ethereum for the period. Who knows.
A public issue triage call: I triage issues once a week, sometimes less, sometimes more. We could do that on a rolling basis, and people could send representatives and so on. Question in the back? I don't have that comparison for mainnet; the core differences come down to some small differentiators as far as private networks are concerned. Besu just crams them all in, in terms of functionality and features. There are a million comparisons online, but I specifically don't have one. Geth is the gold standard: fastest execution speed, but a lot of storage. Besu has some of the slower execution speed, but it doesn't really matter once you reach a certain point; and we have the lowest storage requirements of any of the main clients. Nethermind sits somewhere in the middle, and their focus is just providing a good validator experience. Well, it depends. The question was: can layer 2 participants access other information on chain about the other transactions? It's up to the design of the rollup, but from all the ones that I have seen, privacy is a key selling point, and they do not reveal transaction details to each other. Any other questions in the call or in the chat? All right, I'm going to breeze through this since I'm over time. Why does core development matter?
This is changing gears now. When I talk about core development, I mean Ethereum core development. It's just a loose group of people, which anyone can be involved in, that develops the Ethereum protocol. Getting involved in core development reduces platform risk for you and your organization as you move to public networks, and even on private networks taking the latest Besu updates. More voices in core development deepens collaboration and opens the door for progressive decentralization of your organizations, and also for bringing in the values of larger institutions and CeFi. There's good give and take. What we really need is to bring an understanding of business and regulatory requirements to the core development community. The Ethereum developers mostly care about the technology side; we need more voices talking about regulatory requirements and how we can service them in core development. That could be standards we iterate on with technologists, researchers, regulatory folks, and financial folks; it can mean a bunch of things. But in reality, this is sorely needed, because we essentially build the technology in a vacuum, where we need to start caring about financial players and getting involved in regulation in order to bring more commerce on-chain. Web2 to Web3 migration requires collaboration, and core development protocol updates are constantly in the news, if you care about that; my name is all over a bunch of random stuff now.

All Core Devs Execution call: there are two calls, each every two weeks. I am part of the execution layer call because Besu is an execution layer client; ACDE, All Core Devs Execution, is what it's called. This is the biggest decision-making body of Ethereum protocol development as it stands right now. It is fortnightly, which is my British colleagues' way of saying every two weeks, on Thursday at roughly 10 or 11 Eastern. It's streamed on YouTube, and you can join the Zoom. It's 
not a closed group. Like I said, anybody can join the Zoom; they share the link on Discord. It's mainly client developers and R&D researchers, but we've started to see more and more interested parties: Coinbase, DeFi applications, even some actual folks from banks and other institutions. I want that number to go up. There's an ill-defined consensus process around how we include things in hard forks and how we design the protocol, and it's typically based on the urgency of what's next. The All Core Devs Consensus layer call is the same thing. It boils down to EIPs; that's the decision-making process.

Okay, I'm not going to go through all of this. There are five types of EIPs. Core EIPs are the ones you need to worry about: core EIPs change the way that Besu works fundamentally. They mean we need to change something via a hard fork, and if mainnet requires a hard fork, your network will require changes; I'm going to say that right now. Again, this is presuming that you update your Besu nodes on some regular cadence. ERCs are where I see most of this group actually getting involved. These are smart-contract-level application standards, where I think we can start to bring in more regulatory compliance and things like that. And once account abstraction launches, it will blow the doors off of what we can do on behalf of users, which I think we will need to do in terms of regulatory bodies: things like social recovery, insurance, all that good stuff. This will let us protect users. Networking EIPs are simple and don't really matter too much for this group; they're typically in relation to the core EIPs, so networking and gossip stuff. Meta EIPs are about changing the way that Ethereum's processes work, not super useful for this group. Informational EIPs are about design, and they're just kind of little bulletins.

Okay, sorry for running 15 minutes over, but do we have any additional questions? 
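Since core EIPs land via hard forks, and all of this presumes you update your Besu nodes on a regular cadence, the first operational question is usually "what version is each of my nodes actually running?" Here is a minimal sketch of checking that over the standard `web3_clientVersion` JSON-RPC method. The endpoint URL is an assumption (Besu's default HTTP RPC port is 8545, and HTTP RPC must be enabled on the node); the helper names are mine, not something from the talk.

```python
import json
import urllib.request

def client_version_request(request_id=1):
    """Build the standard JSON-RPC 2.0 payload for web3_clientVersion."""
    return {
        "jsonrpc": "2.0",
        "method": "web3_clientVersion",
        "params": [],
        "id": request_id,
    }

def fetch_client_version(url="http://localhost:8545"):
    """POST the request to a running node's HTTP RPC endpoint (assumed
    enabled) and return the client version string from the response."""
    payload = json.dumps(client_version_request()).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# With a live Besu node this returns its version string, which for Besu
# starts with "besu/"; comparing it against the release notes for an
# upcoming hard fork tells you whether the node needs an upgrade.
```

The same payload shape works for any other JSON-RPC method the node exposes; only `method` and `params` change.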
I know I rushed through the core development stuff, and I'm struggling to think of a way to get this group involved in a kind of dip-your-toes way; it is very much an all-encompassing process that takes up a lot of my time. But think of it as developing the internet, right? We wanted the internet to work in a certain way because it's where we do business. You see your business moving in this direction, so you'll need to be involved in the development of the protocol. Whether that's just consuming information, being aware of what's possible, or directly steering EIPs, I think it's valuable to engage, and I am happy to be a conduit for anything related to that. Any more questions?