So today we're going to be talking about the staking economy and how it's moving from monolithic to modular. We're going to introduce our speakers here. I'm the host, Viktor Bunin. Big supporters over there. I've got a taste for Colombia. Anyway, we have an amazing panel before us. To introduce myself: I'm a protocol specialist at Coinbase Cloud. Coinbase Cloud is an infrastructure provider; we run validators and nodes across something like 30 or 40 different blockchains at this point, and we run a good number of ETH validators. So we're extremely invested in the success of the Ethereum ecosystem, and we're paying a lot of attention to this transition away from what is currently a validator, which is, you know, a box somewhere that has some software on it. The keys are either there or in some other box, and it's pretty vanilla: it goes and does its work and performs its consensus duties. But the trend we're seeing right now is that there's a whole host of essentially middleware solutions that are changing and modularizing the validator experience, so they're almost like plug-ins or extensions. That's a fairly crude example, but you can think of it that way. This is a fairly recent phenomenon: Flashbots and MEV-Boost were the first and primary example of it, reaching an enormous scale within the Ethereum ecosystem. But with the advancements coming from the teams up here, and also other teams working in the space, we expect this to be an extremely exciting and pivotal moment in the development of blockchain infrastructure on Ethereum. So with that, I'm going to pass it over to the panelists to each introduce themselves and their projects. We're going to give them a little bit of time. Not to go super technical, but it's important that you understand what each project is, because it's really going to inform the rest of the conversation. These are nuanced, meaty topics, so the directive we gave them is: a VC
pitch, but for a VC that knows what's going on. So with that: Stefan, please go ahead.

All right. Hello, hello. All right, this works. Hey everyone, I'm Stefan. Until recently I was at Flashbots. One of the big things I've been working on there over the last year has been developing and shipping MEV-Boost, which is often talked about these days, so I'm happy to get into discussing what it means to develop this kind of software for validators. MEV-Boost was developed to help solve two very specific problems around validators and MEV. One was allowing solo validators access to participate in the MEV market, and the other was to protect client diversity and avoid a future where validators would try to fork their own code and create technical debt in integrating new upgrades. The way it's architected and the way it works today, anyone running a validator, whether it's at home or a massive node operator like Coinbase Cloud, can plug MEV-Boost into their system, do some minor configuration, and essentially run it out of the box and get better rewards. So yeah, that's the intro there.

Good evening, everybody. I'm Sreeram.
I'm the founder of this project called EigenLayer, and I also run the University of Washington blockchain research lab. What we're doing at EigenLayer is essentially enabling the sharing of decentralized trust from Ethereum staking with anybody who wants to build a new system on top. The core idea here is that staking is the root of trust. After the merge, we're in a proof-of-stake world: stakers put down a stake and commit to block validation, and if they make an error or behave maliciously, they can lose that stake. This underwrites a certain economic security into the blockchain. What we're doing is enabling that to be flexibly shared. For example, you put down your stake and then restake. Restaking is a new concept we came up with: the idea that you use the same stake, put it at additional risk, and commit to doing additional things, maybe running a new chain, running a new service like data availability, running other middlewares on top of this common stake. The exchange here is that stakers take on additional responsibilities and additional risk, and in return they're compensated with fees or other tokens paid to those stakers. Imagine you want to build a new distributed system. You'd have to go around and try to create a whole new validation network that is decentralized, and fund economic security for it, which is actually very, very expensive. Just to get a sense of the numbers: Ethereum has something like 20 billion dollars' worth of economic stake at risk. If you wanted to build a platform with similar economic security, you'd have to pay the stakers an annual APR of, say, 10%, right?
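As a back-of-the-envelope sketch of that bootstrapping cost (the $20B stake and 10% APR are the round figures quoted here, not live data):

```python
# Rough cost of matching Ethereum's economic security from scratch,
# using the panel's round numbers rather than live data.

eth_stake_usd = 20e9   # ~$20B of economic stake securing Ethereum
target_apr = 0.10      # ~10% annual return a new network would owe its stakers

annual_security_cost = eth_stake_usd * target_apr
print(f"fees to match Ethereum's security: ${annual_security_cost / 1e9:.0f}B per year")
```

With restaking, the same stake is reused, so a new service pays only an incremental fee for the extra risk instead of funding an entire stake pool of its own.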
That's 2 billion dollars' worth of fees per year, just for your other system to be as secure as Ethereum. It's virtually impossible. So what you can do instead is borrow this massive economic security, because you're restaking it and using it for additional services. Anybody can come and build new services on top, thus augmenting the feature set of the Ethereum ecosystem. We think of this as a permissionless way to do feature addition to Ethereum: you're borrowing Ethereum's trust and running new services on it. It is purely opt-in from the staker side; it's not forced on anybody, but the stakers who do opt in are able to earn from this additional risk-reward dynamic. So that's what we do at EigenLayer.

Hey everyone, Collin Myers, co-founder of Obol. We are focused on building what are called distributed validators. The easiest way to describe a distributed validator is that today, all validators are seen as one key, one entity, one individual. It's very singular in nature. Our primary goal with DVT is to change everyone's minds so that validators can become communities. With DVT, you can take a regular validator, use a DKG to divide its key into different shares, and the four of us here today can share a validator together. And if Viktor's house burns down, all is fine, because we use threshold signing and applied cryptography: our node will keep going, it'll keep validating, and the network will not halt. That's what we're focused on. It takes the form of a middleware, and it sits on the ETH2 side.

Awesome. And I know you folks are all starting out on Ethereum, and you talk a lot about Ethereum, but obviously it's not the only proof-of-stake network out there. There's a proliferation of them, and most networks these days are proof of stake. How do you think about the trade-offs of sticking to Ethereum versus also starting to work on other layer ones, or layer twos, and other ecosystems?

I think right now there's a huge narrative of
the bazaar versus the app chain. I think they can both live and survive. I'm an American who spends half my time in Europe, and it's almost culturally how the world is divided. Europe is kind of a bazaar: all the cities are meant to be lived in, everything is close together. America is like app chains: suburbs, houses, everything is very structured and separated. So the way we look at it, Ethereum is the bazaar, and then who's the app chain? For us, as we look at where to take DVT, it needs to fit technically within the network, and what's favorable for us are not-too-fast block times, because there are rounds of communication that need to go between the individuals inside a cluster. So Cosmos is another chain where that could work, and it's kind of the app chain model; they have five-to-seven-second block times. As for working on other chains: DVT works best with BLS signatures. They're homomorphically additive, which enables you to split them up and then re-aggregate and broadcast them in a very efficient manner. Cosmos does not have that. Maybe they will adopt it, or maybe DVT can be fitted into another chain in that sense, but today we are most focused on Ethereum. However, DVT is something that all public blockchains should use, in my opinion, to add more resiliency and fault tolerance to the network. So it comes down to demand, it comes down to where the economic value is, it comes down to where the smartest minds are. And I do believe that outside of the bazaar, the app chain model is probably the only other layer-one approach that would compete with Ethereum. That's how we currently look at the option set.

Very cool.

We are primarily Ethereum-centric, and the reason is that when you want to build a layer like this, you are basically looking for the maximum pool of decentralized trust, because we basically have a decentralized trust marketplace. Where do you have maximum economic security?
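As an aside on the splitting-and-recombining property mentioned above: the toy sketch below is not BLS, just plain Shamir secret sharing over a prime field, but it shows the threshold behavior DVT relies on, where any t of n shares recover the secret while fewer than t do not.

```python
# Toy illustration of the threshold property behind DVT. Real DVT splits
# BLS keys/signatures; this sketch uses plain Shamir secret sharing over
# a prime field to show that any t of n shares recover the secret.

PRIME = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split_secret(secret, n, t, coeffs):
    """Split `secret` into n shares with threshold t (pass t-1 random coeffs)."""
    poly = [secret] + coeffs  # degree t-1 polynomial; constant term is the secret
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(poly)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

secret = 123456789  # stand-in for key material; coeffs would be random in practice
shares = split_secret(secret, n=4, t=3, coeffs=[11, 42])
print(reconstruct(shares[:3]) == secret)  # any 3 of the 4 shares suffice
```

A 3-of-4 cluster like this keeps signing even with one node offline, which is the resiliency property being discussed.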
Where do you have maximum decentralization? How can we leverage this and build a whole bunch of new technologies on top of it? So our attention is actually on onboarding newer and newer modules and technologies. One interesting thing, if you look at Ethereum versus the other blockchains: Ethereum has committed itself to the modular blockchain world, and I think very few people understand the scope of what a modular blockchain world is. The way to think about it is this. One thing we all love about blockchains, across all these ecosystems, is permissionless composability: you can build an app, and I can build something on top of it, and somebody else can build something on top of that, and together they stand much stronger than anything one person could have ever built. But permissionless composability today lives at the app layer; that's how all smart contract systems work. We want to bring permissionless composability to the distributed-systems level. You build a new system, I build a data availability layer, you build a broadcast layer, somebody else builds an authentication layer on top; you just pack all of these together and create a new service. So with Ethereum having committed to this modular paradigm, where different things are done in different modules rather than all bundled together, and with us being all about unbundling trust, taking the trust network and letting people innovate on the different modules, it's a natural fit that Ethereum is the right place to build something like this.

For MEV, okay: deciding where to build MEV solutions sort of starts from the question, where is the biggest problem?
And the biggest problem is where the most usage is, so it makes sense to start from that perspective, with Ethereum. And it turns out that building MEV solutions is kind of hard. You build something on one layer, and then you say, well, it would be nice to go build everywhere else, but there's only so much time and so many resources to do it. The other thing is that it's not a one-size-fits-all solution. There are some principles, some abstractions, some research and ideas that can be reused across multiple different places, but you can't just take the model you deployed for a certain node architecture, for a certain client architecture, and copy-paste it into other chains. It's been really cool to see other teams emerge in other ecosystems that try to solve similar MEV problems at the middleware layer, the node-customization layer. On the Solana side, the Jito team has been working on it for a little while, and it's fascinating to see how different the solution space is. The solution they're coming up with to outsource MEV extraction is actually to slow down block times: because Solana is so much faster at block production than Ethereum, to do any meaningful outsourcing you need slower block times, so that you can add the extra latency that's required. It highlights that even though the principle is the same, the idea of outsourcing MEV extraction from the validator level, the implementation ends up looking completely different. The other thing I'll note is that MEV-Boost is surprisingly simple software. Running a consensus client, doing all the peering and the networking, is really difficult; MEV-Boost is just a plug-in, a sidecar to the system, that allows it to connect to a bunch of other
resources for receiving blocks. It's just a multiplexer of an RPC call. Yet it's so hard to get adopted, right? Such a small change, which you'd say anyone could implement in a day and ship, is not that trivial when you're deploying it to a network with 430,000 or so nodes. There are so many different stakeholders involved, so many different interests. The hardest part of developing this kind of software is not the technology itself; it's being embedded in the ecosystem, understanding what goals are being achieved by the development of this software, what the technical and social goals are, as well as the economic interests of all the parties involved. That's where 90% of the work of developing these kinds of solutions lies. It's not necessarily just the technical side.

Yeah, I think that's such a good point, and it's something that, unless you spend time in multiple ecosystems, is very easy to take for granted: that people outside of your ecosystem have the same world view about what's fair, what should happen, and who should or shouldn't benefit from certain activities. What we find is that you can't copy-paste MEV-Boost, because other ecosystems don't necessarily want those characteristics, and so the software just doesn't make sense as it is. But Sreeram especially talked about some of the use cases kind of off the cuff, and something I'd like to do is make this a little more real in terms of what all of this can look like longer term. So what I'd love is if you could each talk about the most ambitious use cases you're thinking about, that you could potentially solve or address. What does that look like?
So on our side, any type of validator can use DVT. You can be big, you can be small; it doesn't matter. What we find most interesting is using its cryptographic properties to partner professionals with non-professionals. When it comes to how you decentralize a liquid staking pool: today most of the pools are run by professional validators, or they're run by the pool themselves, and over time what you must do is include other people in that validator set. With DVT, you can get consistent uptime, rewards, and performance by pairing someone up with a professional in that capacity. So say there's a four-node cluster today: it can be a Figment, it can be a Coinbase Cloud, and it could be two at-home validators. And if the applied cryptography works the way it should, that validator should have just as much performance. It enables the small person to come in with the big person, and then maybe become the big person eventually. So taking a node and mixing its constituents between professionals and at-home operators is not the tail end of what we're going for, but right now it sits at the most innovative end of the spectrum of how we're testing with people and how we're looking to push it forward.

From our end, the main thing we're quite fascinated by is the ability for the Ethereum ecosystem to become much richer. We can start listing out the top five problems in the Ethereum ecosystem and then start ticking off how we can solve all of them just by using EigenLayer.
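The cluster sizing in examples like the four-node one above follows familiar BFT arithmetic. The formula below is an assumption about how DVT clusters are typically sized; it matches the three-of-four and seven-of-ten figures that come up later in the panel.

```python
# Cluster arithmetic for DVT, assuming the standard BFT threshold:
# an n-node cluster needs n - f signers and tolerates
# f = (n - 1) // 3 faulty or offline nodes.

def dvt_thresholds(n):
    """Return (tolerated_faults, required_signers) for an n-node cluster."""
    f = (n - 1) // 3
    return f, n - f

for n in (4, 7, 10):
    f, signers = dvt_thresholds(n)
    print(f"{n}-node cluster: {signers} signers needed, tolerates {f} fault(s)")
```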
I'll give some examples. Number one: the data availability bandwidth on Ethereum. In the rollup-centric roadmap, computation is off-loaded, but data availability still happens on Ethereum. So the data availability bandwidth of Ethereum, even with upcoming upgrades, will be, say, 80 or 90 kilobytes per second. With a bandwidth like this, the rollups are of course extremely optimized to take advantage of it and still pump in tens of thousands of transactions per second. That's awesome, but in a world where we imagine a lot of digital intermediation fundamentally happening through blockchains, we want to make sure there's abundant bandwidth, and 80 kilobytes per second is not enough. Ethereum itself has a roadmap with some really interesting ideas, called danksharding, where you can increase this up to something like 1.3 megabytes per second. But that's a few years out, and we feel even that is not enough; there are applications that will need much, much more. So the first service we're building on top is a data availability solution on EigenLayer, which can scale the throughput of data availability quite significantly. We're at 15 megabytes per second in our internal devnet already, and I think we can scale this another hundredfold in the coming years. That's one example of taking one pain point in Ethereum and figuring out how new distributed-systems methods can come in and solve it. We stand on the shoulders of giants: we build on top of danksharding, some of the best ideas out there, just good engineering and open, permissionless competition. This has done a lot of good in the layer-two world, compared to what sharding was, where one solution has to be enshrined and there's a lot of internal contention over which is the right solution. A permissionless competition for each of these different features actually leads to a very, very
powerful world.

Another example: people think a lot about whether we'll be in a single-chain world or a multi-chain world, and I think this is not a very relevant discussion for what we're doing. Why? Because even in a multi-chain world, it's very clear to us that Ethereum will be at the center. And what is the center of a multi-chain world? If you think of each blockchain as a node in a graph, you see that the hub node of this network is Ethereum. It is the most connected, the most liquid, and the most secure; these are the three properties you need in the hub node of a multi-chain world, and we feel Ethereum is the right hub node. In this paradigm there are some things lacking. We see a lot of bridge hacks, for example, so can we think about how to build very powerful bridges on top of the Ethereum landscape? That's another thing you can do: once you restake, you can opt in and start running light-client bridges to all the other chains and start bringing very powerful inputs into Ethereum.

Another example: you can think of things like MEV management. You want MEV management when a block proposer is making a claim, "I'm going to follow this ordering rule." What makes them hold to that rule? They can restake on EigenLayer and then opt in to new slashing conditions for what they've specifically agreed to: I'm following this threshold encryption, I'm following this auction model. Whatever the new rules are that you opt into, you can be held to them, because you can make credible commitments on EigenLayer. Those are some examples of what I think we can do.

Yeah, I also think we can just cancel the rest of DevCon. EigenLayer will solve everything. I have a question about EigenLayer: how should validators think about the risk? So internally, to me, there's, okay, my stake...

We're going to get to that.
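For scale, the data availability figures quoted earlier (roughly 80 KB/s today, 1.3 MB/s with danksharding, 15 MB/s on the EigenLayer devnet) compare like this; the 100-byte compressed rollup transaction size is an assumed figure for illustration, not from the talk.

```python
# Rough throughput comparison for the bandwidth figures quoted on the panel.
# The 100-byte compressed rollup transaction size is an assumption.

BYTES_PER_ROLLUP_TX = 100

bandwidth_bps = {
    "Ethereum today": 80 * 1024,              # ~80 KB/s
    "with danksharding": int(1.3 * 1024**2),  # ~1.3 MB/s
    "EigenLayer DA devnet": 15 * 1024**2,     # ~15 MB/s
}
for name, bps in bandwidth_bps.items():
    print(f"{name}: ~{bps // BYTES_PER_ROLLUP_TX:,} tx/s")
```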
Oh, I'm saving the spiciest ones for the end. You want to do it now? All right, spicy ones now. We can do spicy ones. Yeah, let's do it. But here's the condition: the spiciness applies to each of you, not just EigenLayer, and I want to hear about the risks associated with each project and the failure cases that are possible. Have you changed your mind? Where do we start?

Well, maybe I'll give a little bit of background first. Think about a middleware solution that does one thing. Today a lot of people run Flashbots, right? They run MEV-Boost as part of their validators, and they're able to participate in the Flashbots software and ecosystem and all that. The nice thing is that because it's relatively simple software that doesn't make tremendous changes from otherwise expected geth behavior, it's fairly well understood: we understand the risk parameters, and we understand how it interacts with relays and the failure cases there. But what we don't understand is what happens if you're running Flashbots, and you're running EigenLayer and restaking there, and you're also part of Obol, with your validator key split into four, for example. That actually creates a very powerful situation where you can have these incredibly robust, performant, do-everything validators. But at the same time it's also a very scary situation, because of the risks involved: now you have nine client teams doing different things, you have three middleware solutions, and you have upgrades across all of them happening all the time, so the risk starts to compound. That's the background we're thinking about, so maybe there are two questions here. One, I'd love to hear about the risks and failure cases associated with what you're building: what's the
worst-case scenario that could happen, and how are you trying to prevent it? And then the second thing I'd love for us to talk about as a group: we're all marching along, making upgrades and doing all the things. If I'm a validator that's going to be using all of your software, and also something else, how do we make sure I don't get slashed? How do we make sure the Ethereum network remains performant? And how do we make sure your development processes, testing, and so on are in sync between all of you, and also with the other clients I'm using, such as the execution or consensus clients? It's a big, meaty topic. Who wants to go first?

I'll start. With block times, we talked about this earlier: first and foremost, long block times in Ethereum are super important for this entire middleware renaissance, as we're calling it internally. You have these core client teams that have been built up over time; they're funded by the EF, it's free software, it's the minimum viable way you access the network, and now it's time to build enhanced functionality on top of that. Those middlewares must be designed in a credibly neutral manner, and they must be designed with simple modes of failure. For us, on our side of the equation: first of all, what is the number one reason everyone's been slashed to date in the network? It's because people run a configuration called active-passive redundancy.
It means that, to get more effectiveness or more uptime, you run the same key in two places, one online and one offline, and this can result in lots of false positives. So you can't have a highly available validator without DVT, basically. First and foremost, DVT addresses the number one slashable event in the network to date by giving you more availability. When designing the middleware, the modes of failure for us today are just missed duties: you miss your duties, then you take your time and bring your machine back online. And that really only happens if you lose more than 33 percent of the nodes in your cluster. We're talking seven of ten, three of four, and different combinations like that. We'll be giving a talk tomorrow, Oisín and I, on how to design DVT at scale without increasing correlation. So where we're at with DVT today: correlated slashing is one of the worst things that can happen to the network, and we try to avoid it at all costs. We believe that liquid staking pools is where DVT will predominate, so it's our duty and responsibility to make sure it's designed in a manner that doesn't increase correlation, because the worst thing that could happen is a correlated slashing event across 80% of a network that's running the same middleware. Obol is a security middleware, right? It's different from MEV-Boost, where you use MEV-Boost to earn more;
It's it's different than me be boost where you use me be boost to get more With obel you use it to protect yourself Which in theory will probably earn you more as well So today when it comes to correlation, that's our biggest focus on testing We think it's probably the biggest Risk of the whole future of staking is making sure that correlated slashing events don't take place And yeah, that's where we're at What's the biggest risk go using I can learn there are many risks As many as things that fixes there We've tried so There are really two kinds of major failure modes one is you know, you get a whole bunch of Stakers collude and They're not only attacking the core protocol, but also attacking all these other services So the potential profit from actually your attack has increased because you have a much higher exposure That's number one. I think this is Even though this is somewhat significant. I think it can be addressed quite Quite well and the basic paradigm for why this can be addressed well is we have to compare Existing systems to this new upgrade using I can learn imagine you're running a Whole bunch of dabs and all of them depend not only on Ethereum for service, but also they depend on some Oracle Bridging service and a few other things. 
That's exactly how the ecosystem is today and Even though Ethereum is giving you very strong security guarantees in terms of the economic security You have all these other dependencies which do not have you know the same same level of economic security or decentralization built in and you're only as safe the daps are only as safe as the weakest link and by Staking the each stakers for example, if you know just to give some numbers if there is 20 billion at stake in eat But like there's not there are three middle-wares each of them have like one billion at stake you just attack the weakest pool and you can actually potentially completely corrupt all the inputs and the Alternative universe is where each stakers all opt-in to provide these services Especially if these services are lightweight or scaled Horizontally, then it's possible that a lot of each stakers will opt-in and when you have a lot of each stakers opting in you are Essentially to corrupt any one service to corrupt any one that you have to corrupt a majority of the each stakers and They're putting themselves at slashing risk at some point this becomes infeasible There is a hardening of security you want to take 20 billion dollar of a flash loan and go and you know Stake and get burnt for 10 billion dollars and going to extract more than that. 
It's very difficult. So when you have a lot of restaking happening, your system's net security actually increases significantly relative to where we are today. Okay, the counterpart to this is the other kind of risk, which is: what happens if there are programming errors? You have a bunch of these services running, and one of them has a bug, or even worse, it's maliciously designed to break the entire network. Somebody is offering a 20% yield, things we have seen before, and everybody opts in, and at the end of the day there's some massive slashing event, all the ETH stakers are slashed, and there is mayhem. This is our worst nightmare. Okay, how do we solve this? To get a good analogy: in the Ethereum ecosystem there has been a lot of thought on how to create systems that are immutable and ossified, and the right approach is to start with training wheels, like layer-two solutions today. You have these training wheels where governance mechanisms can backstop risks, and that's the same thing we will do. Essentially there are two grades of services on EigenLayer. In one grade of service there is what we call a slashing veto. There is a committee of Ethereum community members; this is not a token DAO that you can buy out, this is reputable Ethereum community members, including people building on top. This committee can veto slashing events that happened illegitimately. So slashing happens,
but it doesn't get actuated right away; there's a gap.

And actually, I'll clarify there, because one thing to note: when we say slashing, it's not slashing on the Ethereum blockchain itself; it's slashing via the EigenLayer protocol. What happens is that the ETH you have staked essentially gets withdrawn to an EigenLayer smart contract, and the EigenLayer smart contract confiscates some or all of that ETH, depending on the slashing condition you triggered as part of EigenLayer. So it's slashing conditions on top of slashing conditions, depending on which rules of the protocol you break. That's what he's referring to.

Absolutely. So the governance committee can veto slashing on top of EigenLayer, and this prevents things like risk contagion. But as these protocols evolve and have been well tested in the wild, they can ossify themselves to the other grade, which is not subject to any slashing veto. The only thing the governance committee can do is veto slashing; they cannot add new slashing, so the stakers are not taking on additional risk. But people building on this middleware are taking on a governance risk, because legitimate slashing might get illegitimately vetoed. So as you grow in trust, when you build these new services and they've been tested in the wild, you can ossify yourself to the other grade, where you're not subject to the slashing veto. At that point the stakers have to opt in, and you have to convince them to opt in, because they're losing one of their core protections, by establishing reputation and testing yourself in the wild. That's how we mitigate some of these risks. It necessarily requires exerting subjectivity, and I think this is one thing the whole blockchain space should take more seriously: how do we combine subjective mechanisms with credibly neutral mechanisms so that we can get the best of both worlds?

Very good.
All right, back in the hot seat you go.

Yeah. Before I start answering, I actually want to get a sense of what the room is composed of, so I'll ask for a show of hands, and please participate. Anyone who's running a solo validator at home, can you put your hand up? Okay. Anyone who works for a professional node operator or validator company, put your hand up. All right, a good chunk of you. Anyone who's building validator middleware, sort of what we're talking about here, put your hand up. Okay, a small group. What is it? That's a family over there. Anyone who's validating or staking on other networks? Pretty good. Okay, cool. All right, mostly professional validators, actually, which is interesting. Okay, so risks: MEV-Boost risks, and facts, because at some point risks are no longer risks and they become actual incidents. In developing MEV-Boost, and maybe any software really, it's easy to think about the first-order risks: what are the first-order possible failure modes? You can create a security model that says, okay, here are all the different ways the software could go wrong or get abused, et cetera. For MEV-Boost this was threefold. From the validator's perspective, you are outsourcing part of your power to these third parties called relays, and there are three ways these relays could misbehave. One is that they produce a block that's simply invalid: you believe you're proposing a valid block to the network, but the block is invalid. The second is that they can lie about the value of the block: they'll say this block is worth ten ETH, but in fact it's only worth one ETH. And the third way is that they could withhold the block: they give you a block, you sign it, you return it, and then the relay just never reveals it to the network, causing you to miss the slot. So, okay, you think about it.
These are the three different things the counterparty is being trusted with. What's the impact of each, and then how do you start to mitigate them? Well, for the validity one: if a relay continues to produce invalid blocks over time, that's publicly known, so you can see that this relay is not behaving as expected, and you can simply disconnect from it. The validator in this case has the power to protect himself from being attacked, and critically, they can also notice when this happens to other parties. This is a key part of the security model: you don't want a validator who is maybe going to propose three blocks a year to have to wait until their next block proposal to learn that the counterparty they're interfacing with is malicious in some way; you need to be able to see it from the state of the entire network. The third one, block withholding, is the most difficult, because there's this problem of attribution. You don't know if the relay revealed the block too slowly, or if the validator never submitted their signed block to the relay and only submitted it to the rest of the network. There's a lack of attribution as to where the fault lies, and these kinds of issues are the most difficult to solve when you're building software for validators. If you don't know which actor in the system the fault originates from, you can't mitigate it as effectively, and you have to look at wider health metrics for the system. The solution for that specific risk is looking at: is the blockchain continuing to propose blocks?
So you can have this health factor for the blockchain as a whole: if X percent of the last hundred slots had a valid block proposal, you can consider it good enough. If, for whatever reason, the health factor falls below some threshold, you have a circuit breaker that disconnects from all the middleware that could possibly be causing these kinds of faults, and you fall back to the tried and trusted operation of the system. Okay, those are the first-order risks. I was going to continue, but wait, there's more.

This is making what we do look a lot more simple. This is great coming from the most complex thing on the panel, but this is awesome.

All right, second-order risks. These are the risks that don't come directly from the behavior of a single node, but are emergent when you look at the entire blockchain operating the same software. What are the economic incentives? What are the marketplaces that get developed on top of this, and how does that impact the expected behavior of the software? I think this is where censorship comes into play, right?
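The circuit breaker just described can be sketched as a small rolling-window monitor. The window size of 100 slots matches the example in the discussion, but the 0.8 threshold and the class shape are illustrative assumptions, not mev-boost's actual implementation.

```python
# Minimal sketch of the middleware circuit breaker described above:
# track the fraction of recent slots with a valid block proposal, and if
# that health factor drops below a threshold, disconnect all middleware
# and fall back to vanilla local block production.
from collections import deque


class CircuitBreaker:
    def __init__(self, window: int = 100, threshold: float = 0.8):
        self.slots = deque(maxlen=window)  # rolling record of recent slots
        self.threshold = threshold
        self.middleware_enabled = True

    def health_factor(self) -> float:
        """Fraction of observed slots that had a valid block proposal."""
        if not self.slots:
            return 1.0
        return sum(self.slots) / len(self.slots)

    def record_slot(self, had_valid_proposal: bool) -> None:
        self.slots.append(had_valid_proposal)
        if self.health_factor() < self.threshold:
            # Trip the breaker: fall back to tried-and-trusted operation.
            self.middleware_enabled = False
```

Note that the breaker reacts to network-wide symptoms rather than trying to attribute blame, which is exactly the point made above: withholding faults can't be pinned on a single actor, so the fallback has to key off aggregate health.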
So you can solve all of the micro risks at the individual layer while still having bigger, broader risks that are emergent from the use of the entire system, risks that can't necessarily be solved just through the initial design; they become second-order effects. Some of them are easier to predict than others, and it's a question of iterating on the ecosystem of the solution, both at the technical layer and at the industry level, to try to mitigate them.

Yeah, and I think censorship is a prime concern for the whole ecosystem right now; it's been a prime topic of conversation throughout other panels and talks at Devcon. So yeah, we're all working on fixing it together. That's actually one of the things EigenLayer is often mentioned for in ethresear.ch posts, how it can potentially support solving that problem. So you guys talked about the individual risks associated with each project, but now I'm going to go to the next question, which is: how do we deal with the amalgamated risk profile that results from using multiple middleware solutions? Why is that funny? Amalgamated, that's a word. I don't know if it's right, but it's good.

Yeah, I'm an immigrant, so we don't have preconceived notions about words; all words are equal to us. All English words are nice; I like them. So where we're at: the fact of the matter is there's really only one predominantly used middleware, and it's mev-boost. There will be many more, and mev-boost may pave the way for all of us by showing that it's actually doable. So today there aren't really combinations of middleware happening. We actually recently integrated Charon, which is our client, into mev-boost, and now a distributed validator can propose blinded beacon blocks, which is cool.
So actually that kind of opens up this entire new landscape where a validator is looking at the mempool; if there are ten people in one validator, all ten of them have a view of the mempool, and since there's a consensus mechanism built inside of it, that opens up a whole new paradigm of not only what MEV looks like but also what security looks like. So we have distributed validators combined with mev-boost running on testnet today. We don't get to propose very often, but we're going through that process of testing it and figuring it out. Fortunately for us, they came first. I really don't think it would have been smooth if DVT and mev-boost had launched at the same time. It was kind of the natural order of getting to the merge: let's get mev-boost near mainnet, let's change it from a client into a middleware, let's merge, and then after that we take on the next middleware, which is DVT or others. So I think doing them in phases as a community is super important. I think it was kind of unknowingly designed that way, and that's how we've interacted with the client teams, for example; it's kind of like waiting in line, and your turn will come up. So now that we're seeing more middlewares come out and they're getting more use, what happens when they sit on top of each other? We've been looking at it less from the risk perspective and more from the opportunity perspective, but through that process we'll probably find what the risks are.

Yeah, and I'm glad you mentioned the client team angle; that's actually the next thing I want to talk about, because I think it's super important. Any other takers on the amalgamated risks, or the amalgamated opportunities?

I already mentioned, for example, that you can do MEV-type things on top of EigenLayer; that's one set of opportunities. Another set of opportunities is: can you build distributed validation for some of these other services
built on top of EigenLayer? Because the same set of reasons why you would need DVT on top of the core layer also applies to services built on EigenLayer. So these are some of the touch points and interfaces. I think one nice thing is that the core EigenLayer design is kind of a sidecar; it's not directly touching the client, so we're basically an opt-in add-on. Those are the two aspects, but there are some touch points between these different pieces.

One of the other interesting things to mention about EigenLayer and Obol is the long-term goal of this cryptography project: dealing with what's called the lazy validator problem. Today it's not cryptographically possible to objectively prove who in a threshold signing scheme was not doing their job. EigenLayer can't fix that; that's more like moon math territory. But then it comes down to: how do you act on it once you can identify the lazy person in the DVT cluster? You can disincentivize, you can punish, you can do a variety of different things. Today the only way the industry has thought about punishing that actor, in our case, would be to create a token, make everyone bond the Obol token to their node, and disincentivize by slashing that token; and to your point, we would have to create our own trust network. At the later tail of DVT, in its more mature state, the goal is to use cryptography so that a group of people can run a validator together without knowing each other. They don't have to know each other, they don't have to trust each other. But to get there you have to deal with the lazy validator problem, and today the best way to do that is to create a new trust network, which is just, you know, 2017-18 all over again. So there are things we'd need for the later tail of what we're doing that EigenLayer is trying to build.

Yeah, I have a question.
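The interim bond-and-slash approach to the lazy validator problem could look roughly like this toy sketch. The `Cluster` class, the bond size, and the slash fraction are all hypothetical; the genuinely hard part, cryptographically identifying the lazy node, is simply taken as an input here, since, as noted above, that proof isn't possible today.

```python
# Toy sketch of the interim "bond and slash" idea for a DVT cluster:
# every node bonds a service token, and a node identified as lazy has
# part of its bond slashed. Who is "lazy" is an external input, because
# proving it objectively inside a threshold scheme is still open.
class Cluster:
    def __init__(self, bond: float = 100.0):
        self.default_bond = bond
        self.bonds = {}  # node_id -> remaining bonded tokens

    def join(self, node_id: str) -> None:
        """A node joins the cluster by bonding the default amount."""
        self.bonds[node_id] = self.default_bond

    def punish_lazy(self, node_id: str, fraction: float = 0.5) -> float:
        """Slash a fraction of a lazy node's bond; return the amount slashed."""
        slashed = self.bonds[node_id] * fraction
        self.bonds[node_id] -= slashed
        return slashed
```

The design tension the panel describes is visible even in this sketch: the slashing only works if everyone trusts whoever calls `punish_lazy`, which is why it amounts to bootstrapping a new trust network.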
Yeah, please. Is it better if we have a world where all the middleware solutions can innovate, throw new ideas at the wall, figure out what sticks, and if something gets adopted then it's All Core Devs' problem now and they have to deal with it? Or should each of these new middleware solutions have to figure out its own governance mechanism for how to continue maintaining it and shipping new features, and how it fits with the principles of Ethereum and with all the other middleware solutions being built? Is one path better than the other?

By the way, I really regret showing Stephan the questions ahead of time. So in our case... did you just get front-run? What was the question, how do you feel about front-running? No, his question was: you guys are all building really cool stuff, but it's sidecars, it's different clients. As we mentioned earlier, there are nine different clients, either execution or consensus, currently on the Ethereum blockchain, and now you have this slew of middleware solutions that aren't part of All Core Devs, aren't part of the EIP process, and aren't part of any established rails by which the Ethereum community releases infrastructure software and upgrades. So the question is: how do you deal with that? Do you just ship whatever you want and then throw it at the core devs and say, it's your problem now?

So ours was the reverse problem.
Actually, I was part of a group of people who were focused on pre-genesis for eth2. We spent a lot of time on enablement, onboarding, and education. Then we began to focus on post-genesis problems, and one of the first was stake centralization. DVT started off as a research project at the EF, and then we worked with them to build a reference implementation of it, and then we took it on, and now it's basically our responsibility as a project to push it forward. In my opinion, that's more how the EF is being designed today: if you want to be a real decentralized foundation, you probably can't ship too much code once you get more mature. Their job is to be educators and business people: do all the research, do the legal work, do the business work, enable a community, and push out technology that other people can take and run with.

It also ties into the economic scheme of how things work, right? The fact of the matter is that the client teams are funded by the Ethereum Foundation, or in some cases Joe. And now middlewares are not anymore, right? We have our own private funding; we're not relying on the EF. Our software has to be open source, obviously, but it doesn't need to be free. That fact creates a whole new world of economic schemes, because economic incentives and coordination will determine all the relationships. At the base layer, the relationships at the client layer are straightforward: it's copyleft, everyone can use it, it will be forever free.
The EF has given them good chunks of money to do so, and now we're at a new layer, and we get to design it economically the way we want to.

Yeah, from our end, I think one of the things that has been lacking, and Collin already alluded to it, is economic models for people building these new services, and that's something we're focused on. The other issue Victor raised is the question of whether there should be governance processes that bring on new things, or whether we should allow permissionless, open competition. I'm very much on the side of open innovation. If you look at the rate of innovation across the various layers of the blockchain stack, the rate of innovation we saw at the dapp layer is simply amazing: you can take anybody else's ideas, compose things on top, and build new things. Whereas if you were a protocol dev, there were very minimal opportunities for you to express your engineering and building skills, because the only way you could do it was to go start a whole new network. What we need are mechanisms by which we can massively accelerate the rate of innovation at the core protocol layers, because right now it's entirely log-jamming the rest of the applications that could be built on top. So we feel that as long as there's an attendant economic model for each of these middlewares being built, for example on top of EigenLayer, it can work: the middleware could collect a fraction of the fees, with the remaining fraction going to the stakers. Or it could be, hey, you have a new token and you have dual staking: you stake your own token as well as your staked ETH.
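The two economic models just mentioned can be illustrated with toy arithmetic. The fee cut, the minimum stakes, and the function names are arbitrary examples for the sketch, not any protocol's real parameters.

```python
# Toy illustration of the two middleware economic models described above:
# (1) the middleware keeps a fraction of fees and forwards the rest to
#     stakers, and
# (2) dual staking, where operators bond both ETH and the service's token.
def split_fees(total_fees_eth: float, middleware_cut: float = 0.1):
    """Return (middleware_share, staker_share) for a batch of fees."""
    middleware_share = total_fees_eth * middleware_cut
    return middleware_share, total_fees_eth - middleware_share


def dual_stake_ok(eth_staked: float, token_staked: float,
                  min_eth: float = 32.0, min_token: float = 1000.0) -> bool:
    """Under dual staking, an operator must bond both assets to participate."""
    return eth_staked >= min_eth and token_staked >= min_token
```

Either model gives the middleware a sustainable revenue or security base of its own, which is the self-sustainability point the panel goes on to make.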
So there could be a variety of different models by which these middlewares can become self-sustaining. But I do understand the concerns; for example, Flashbots has done a great job in stewarding MEV away from things like multi-block MEV and reorgs, where things can get quite hairy. And I think the pressure on these things should be exerted socially rather than through any kind of formal governance process. What about you?

Yeah, I fall on that side of the camp. I think it's really tricky to design good standards bodies or governance bodies over anything, but over open-source technology in particular. I think we're very lucky that we have an Ethereum core development ecosystem that's so committed to transparency and openness; it has allowed a lot of these social consensus dynamics to get expressed directly into how the protocol gets designed. All these wars and arguments being had in public can go away if there's a formal process by which things get approved, and then the question becomes who's acting to get these things approved, and it's a completely different game, one that isn't necessarily about public dialogue and discourse, and that's a big part of it.

I mean, Victor, you helped out a lot in the development of an MEV solution, right? There was this eth2 working group that got started maybe this time last year to develop the mev-boost solution and bring all the stakeholders in house. What do you think is the role of all these different stakeholders going forward? Is it that you vote with your feet, you decide which technology you operate as a node operator, which technology you use? Or should there be some more active process for involving
those views and opinions? Yeah, that's a great question. It's hard to know, but I think that as infrastructure providers, what we want, by and large, is to be unopinionated: we want to take open-source software, we want to run it in the vanilla way in which it was designed, and we never want to express opinions over the state of the network, what is allowed or not allowed, censorship, or any other properties. So when we think about the designs of these various pieces of software, something we keep in mind is that as infrastructure providers we know how to run infrastructure really well. The things we focus on are performance, security, all the components that enable us to run great infrastructure. But when it comes to the characteristics of the design, or the trade-offs the design makes, I think there it becomes much more of a conversation and a vote-with-your-feet kind of thing.

We actually did have a different MEV solution come and talk to us; they explained their design to me, and I said, that is completely uninteresting to me. And that mattered, because they were like, well, I had a hundred percent hit rate before I talked to you, and I'm like, yeah, well, here's why your idea is dumb and I'm not going to do it.

This is why Victor makes the big bucks. Brutally honest, but direct, and I'm loving it.

So, you know, I think we try to influence as much as we can in a way that still allows us to remain credibly neutral as infrastructure providers. But at the end of the day, we have to make decisions, and the decisions we make as infrastructure providers always have to be aligned with the long-term goals and health of the network. If we're not doing that, then our business is dead and nothing matters. Okay, we're very much at time.
Thank you so much, and thank you to the speakers, really appreciate you guys. All right, we'll be here if anybody wants to talk, and if not we'll be outside, and if not you can find us on Twitter and Telegram and all the things. Thank you guys. Thanks, everyone.