I want to introduce our moderator because it's not Tarun Chitra. This was supposed to be our moderator, but we canceled him because he had to leave a little early. And instead we have his alt, an even bigger brain moderator, wait, where are we, Guillermo. Guillermo is, I want to introduce Guillermo. He is the head of research at Bain Capital Crypto, and he also has a PhD from Stanford like all these other smart people, writes a bunch of research papers, and is going to ask us insightful and probing questions to get to the heart of the trade-offs of bridge design and the future of cross-chain communication. So thank you Guillermo for moderating this. Well, thank you for the far too kind intro, and in fact, because you've said so many nice things about me, the odds are high that you're full of shit, but it's fine. Anyway, so yeah, so I guess as a first, you know, let's start with the big intros: we have you three. I know two of you quite well. Chris, unfortunately, I don't quite know you. So that means I'm going to pick on you to start. Give me a second intro of who are you, what do you do, what have you done to deserve this, and what have you done to be up here? Sure. Why would you, who did you kill on the previous slide? What? So I'm Chris Whinfrey. I'm one of the co-founders of Hop Protocol. So Hop is one of the only bridges that hasn't been hacked yet. Spicy. Closing in on $3 billion in volume so far. Before I was doing Hop, I was a security auditor. Covered projects like Opens Up and Framework, Augur, dYdX, Decentraland, and a lot more. Is that correlated with the bridge not being hacked yet, or? I think so. I like to think so. All right, I think Hart, unfortunately, you are next. Sure, guys, I'm Hart Lambur. I am the co-founder of a protocol called Across, which is a cross-chain bridge. We'll talk about the design and how it compares to Hop and other things shortly. You won't front-run me like this. Yeah, I won't front-run you. I am also the founder of a protocol called UMA, not this Uma. No relation. No relation, although there is a funny story about how Uma heard about UMA early on, but we won't go there. But we are an optimistic oracle. UMA is an optimistic oracle, and we use this optimistic oracle as a security model for Across, which we'll get into. Across also hasn't been hacked, so I think... Congratulations, I think. Yes. So far, at least. So far. But we'll talk more about that design pattern or the design tradeoffs going forward. Hi, I'm Uma. I'm the co-founder of Succinct Labs. We're a newer project slash company that started just this past summer, and we're working on a proof-based bridge. So also, we can talk about the design. People also call this a ZK bridge. There's nothing zero-knowledge about it, so I like to call it a proof-based bridge. And yeah, we can just talk about how all the snarks work and how our bridge is going to work. Cool, so I guess to start, bridges are kind of a fraught area in a lot of ways. Not just because of the hacks, but I mean, even definitionally, people are like, oh, that's a bridge, or that's not a bridge, or whatever. And there's a bunch of categories of bridges. So first things first, I guess, can one of you, whoever is feeling the spiciest or least spicy, depending on your day, describe to me what are the categories of bridges? There's an ecosystem of different types of bridges. There's succinct bridges that use the power of ZK™. There are a bunch of other designs. So actually, maybe before we even...
Yeah, so let's set the definitions first. Anyone give me how these bridges separate into their respective categories, so to speak? I mean, I'll shoot and go and say it. I think you can broadly talk about asset bridges, bridges for bridging assets between blockchains, and let's call it data bridges, or messaging layers, or arbitrary message bridges, which are for bridging data. Okay, do data bridges contain asset bridges as a subcategory or not? I think we could probe that. I'm not sure. I think it depends on design. I'd argue every asset bridge is underpinned by a data bridge. Okay. I would generally argue, but there's sort of some nuances, because for example, Across uses some of the canonical data bridges, too, to bridge assets, but... All right, yeah, let's pause on that first. Okay, so fine. So we have data bridges. And then asset bridges. And you're right. I think you could maybe do a subset thing here, too, with some caveats or asterisks. Sure. And then on the asset bridges, I think one important thing to discuss and potentially agree on is dividing them into wrapped asset bridges, or what I sometimes call lock and mint bridges, where you take an asset, you lock it on its home blockchain, and you mint a representation of it on a destination chain, versus what I will call liquidity bridges, where there are canonical versions of the asset on both the source and destination chain. And the bridge here functions almost as a convenience mechanism, or a convenience tool, to easily move that asset from one chain to another. And Chris, I want to see if you in particular agree with this, because I think we are both in the category of liquidity bridges. Yeah, I completely agree. I think we're both liquidity bridges. And then I think you could even break up each of these categories into a bunch of subcategories, because there's tons of different approaches, like snarks, the hub-and-spoke model, and the one that's been kind of the worst in terms of security, which is multi-sig bridges. Ooh. OK, so fine. So you both chose to be what do we call it, liquidity-based bridge? Liquidity bridge. All right, fine. Why not, for example, a wrapped asset bridge? Like, does it matter? Do you care? Is it an interesting thing? So we actually rely on these wrapped asset bridges to exist. So every single roll-up has a wrapped asset bridge, and you deposit into the Layer 1 contract. And a representation of that deposit is minted on Layer 2. So with roll-ups, you're able to kind of secure the whole wrapped asset bridge with a smart contract. But where we've seen things go wrong is when you have a wrapped asset bridge that is a multi-sig, because wrapped asset bridges need to hold a lot more TVL than liquidity bridges, because they're not just holding enough liquidity to kind of facilitate cross-chain transfers. They're actually holding all of the liquidity of that cross-chain asset. And so what liquidity bridges do is we basically bridge between assets that are produced by wrapped asset bridges. I'll say if I recall correctly, I think Uma has a spicy take on this. Well, I want to interrupt and put a little nuance in this, too, because I want to further differentiate between maybe nuanced types of wrapped asset bridges. So roll-ups, like Chris is right, that when I want to create a version of ETH on a roll-up, I'm locking ETH on Ethereum and producing that version of it on the roll-up. But I would argue that's not exactly the same type of wrap.
It's different because it's adopting the same security parameters as the roll-up itself. And so if you trust the roll-up, you're fundamentally trusting this supposed wrapped asset. So I almost refer to that as the native version of that asset on the roll-up. The type of wrapped asset bridge that I think Chris is shitting on, and I totally agree, is the one where you literally do just take the asset, lock it on Ethereum, and then there's an oracle or a different security method that's potentially super insecure that's minting a representation of that on your destination chain. And then for the user of that asset on that destination chain, that user has risk to the security of that oracle in perpetuity. And if that ever breaks, that user who just thought they held this valid asset is left with a donut. And those are the designs that I think are really dangerous and not cool. So sorry, Uma. Yeah, I think your point of these minting bridges having more value locked up in them, I like to think of it as basically if you're a minting bridge, you have the integral of all the volume that's passed through your bridge, whereas if you're a liquidity bridge, you're going to have the derivative. Because at any moment in time, you only need what's going to be in flight. And so I think it's actually like a much more elegant solution to the problem, because you don't have this huge honeypot sitting there that's like waiting to be attacked. And even if it is attacked in a liquidity bridge, that's bad, but it's not as bad, because you just have the instantaneous liquidity you need. Cool, OK. So I don't think I have any additional questions about the specifics of liquidity bridges. I will have additional questions, which we'll start right now, about what is your liquidity bridge? Like how do you describe it? What is the architecture? How does one construct it? What is this thing? Sure, so with Hop, we've kind of come up with this hub-and-spoke model where we use Ethereum to pass messages between different networks. So these are Layer 2s, or sidechains like Polygon or Gnosis Chain. And so we leverage each of the native message bridges. So each of these kind of Layer 2 networks or sidechains has a message bridge with Ethereum where we can send messages. And what we do is we actually bundle many messages at the source chain, and then we'll Merkleize those on chain after many messages have been sent. We can propagate this message, or this Merkle root, through Layer 1 Ethereum to the destination. And this is how we can do scalable but very slow messaging. And then we add a liquidity layer on top of that, where we have liquidity providers that kind of front liquidity when a user makes a bridge transfer. And then as those transfers happen, many of them get bundled up. The Merkle root gets to the destination, and then everything is settled. And so that's how we're able to bridge across different Layer 2s in a way that does have the full security of Ethereum if you're using Optimism and Arbitrum. And then we're able to actually put these stopgaps between each network because we have this hub Ethereum contract. And this lets us support kind of sidechains that don't have the full security of Ethereum without exposing users to the risk of those sidechains if they're on a true roll-up.
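To make the bundling idea Chris describes a little more concrete, here is a minimal Python sketch of Merkleizing a batch of transfers into a single root and later proving that one transfer is inside the bundle. Everything here is illustrative: the Transfer fields, the hashing scheme, and the function names are invented for the example and are not Hop's actual contract interfaces.

```python
# Hypothetical sketch: bundle many transfers into one Merkle root (relayed
# source L2 -> L1 hub -> destination L2), then settle individual transfers
# by proving inclusion against that root. Not Hop's real contracts.
from dataclasses import dataclass
from hashlib import sha256

def h(*parts: bytes) -> bytes:
    return sha256(b"".join(parts)).digest()

@dataclass(frozen=True)
class Transfer:
    sender: str
    recipient: str
    amount: int
    nonce: int

    def leaf(self) -> bytes:
        return h(self.sender.encode(), self.recipient.encode(),
                 self.amount.to_bytes(32, "big"), self.nonce.to_bytes(32, "big"))

def merkle_root(leaves: list[bytes]) -> bytes:
    """Collapse all transfer leaves into the single root that gets posted."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                       # pad odd levels
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes needed to show one transfer is in the bundle."""
    proof, level = [], leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[bytes], index: int, root: bytes) -> bool:
    """What the destination checks before settling a fronted transfer."""
    node = leaf
    for sibling in proof:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root

transfers = [Transfer("alice", "bob", 10, i) for i in range(8)]
leaves = [t.leaf() for t in transfers]
root = merkle_root(leaves)
assert verify(leaves[3], merkle_proof(leaves, 3), 3, root)
```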
So essentially, Hop only allows you to do Layer 2 to Layer 2 transfers, or anything where you have one kind of consensus layer, a truth layer, so to speak, that's kind of below it? So Hop can support any network that has a message bridge with Ethereum. OK, yeah. Got it, got it, got it. Hart, tell us about Across. So yeah, there's a lot of similarities and then some important differences. So Across actually only supports chains that have a native asset bridge with Ethereum. So what Across does is we actually only have a single liquidity pool in the protocol. The way to think about it is the protocol is effectively the settlement layer. So you have a single liquidity pool and the protocol will send assets from that liquidity pool to various spoke destinations. That can be slow, that can be painful, that can actually be quite costly. So what Across does is we have a third-party actor we call a relayer. And so if Uma wants to move assets from one chain to another, this relayer will effectively front them capital, they'll send the assets immediately, so very quickly, to Uma on the destination chain. And then it will request to be repaid optimistically from the liquidity network in the protocol. And so the advantage of this is since the protocol is kind of continuously rebalancing its liquidity between its hubs and spokes, we like to think of this as a very capital-efficient design where we're trying to use as little capital as possible to do as much volume as we can. And kind of to Uma's earlier point, the sort of thesis here is that less TVL per unit of volume is better. So the more capital-efficient you are, not only are fees lower, TVL has a cost of capital, a cost of capital you have to pay for. So if you have less TVL, you can have lower fees. But also you have less security risk. There's just less of a honeypot. So essentially, in your case, the protocol and the relayers themselves kind of act as market makers. They are taking on some risk in kind of doing this optimistic fronting. There's no, so the relayers are effectively making two-hour loans. But that is risk, right? There's risk, it's not really market risk. They don't have risk to the price of Ethereum going up or down type thing. If they're, sorry, I should be clear. If you're holding that Ethereum and you're not planning on selling it, if you think of it as stuff you own, you're not taking risk. You're making a loan in this asset, right? And roughly speaking, I still should be compensated, right? In some sense, that's Ethereum that I don't have, that I could just sell or do something with. How am I being compensated for doing that? Yes, you charge a small fee, right? So basically, you charge a two-bip fee on this loan, and two bips over two hours annualizes to a very healthy return. And again, there's ways where this could be more competitive. There could be auctions or whatever else. The other thing that's kind of interesting about our relay design is the relayers, they get to choose where they get paid back. So for example, a relayer, if Uma was moving assets from Arbitrum to Optimism, the relayer could ask to get paid back on Polygon. So a kind of interesting feature of this design is if the relayer is like a market maker or kind of the person that's generally doing arbitrage-like things, they could actually get paid to move their assets potentially to a chain where they wanted them, which is kind of interesting. Interesting, OK.
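Hart's aside that "two bips over two hours annualizes to a very healthy return" is easy to sanity-check with a back-of-the-envelope calculation. The assumptions here (capital redeployed back-to-back all year, no gas costs, no idle time) are ours for illustration, not Across's actual fee schedule:

```python
# Sanity check of "2 bps per 2-hour loan" relayer economics. Purely illustrative.
fee_per_fill = 0.0002                     # 2 basis points per relay
loan_hours = 2                            # repaid from the pool ~2 hours later
turns_per_year = 365 * 24 / loan_hours    # 4380 two-hour loans per year

simple_apr = fee_per_fill * turns_per_year
compounded_apy = (1 + fee_per_fill) ** turns_per_year - 1

print(f"simple APR:     {simple_apr:.0%}")      # ~88%
print(f"compounded APY: {compounded_apy:.0%}")  # ~140%
```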
And then I guess we have a bit less of an economic focus currently, Uma, but a very interesting bridge design. Yeah, I think our bridge, or what Succinct is building, is not a liquidity layer. It's not even necessarily like a token or asset bridge. I think we're starting off by really focusing on the arbitrary message passing part. So I think that's like a big difference. The high level design is basically similar to IBC, how you have like a light client for a source chain running in the execution layer of a target chain. Once you have that light client for a source chain running in the execution layer of the target chain, then you have access to the source chain state. Similar to how you guys, I guess, propagate the Merkle root to send a message. Here, if you have like Ethereum's state root on Gnosis Chain, then you can prove things about like, oh, stuff happened on Ethereum. So you could have an arbitrary message bridge contract on Ethereum that users send contract calls to, and then a relayer would send a proof on Gnosis Chain, for example, that like, oh, this message was sent on Ethereum, and now you should execute this corresponding message on Gnosis. And so when you have arbitrary message passing or arbitrary data passing, of course, on top of that, you can build a token bridge, you can build a liquidity layer, you can build all this stuff. But yeah, I think our main focus is basically on making this like trust minimized cross-chain communication layer to start.
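As a rough illustration of the pattern Uma just described, here is a hypothetical Python sketch of a message bridge sitting on top of a light client: the light client stores source-chain state roots attested by a consensus proof, and a relayer submits a message plus an inclusion proof that the target-chain contract checks before executing. The class and function names are invented for the sketch, and both proof checks are stubbed out; in the real design the consensus check would be a SNARK and the inclusion check a storage proof against the state root.

```python
# Illustrative sketch of "an arbitrary message bridge on top of a light client".
# All names are hypothetical; the two verification functions are placeholders.
from dataclasses import dataclass, field
from hashlib import sha256

def verify_consensus_proof(state_root: bytes, proof: bytes) -> bool:
    return True   # placeholder for a SNARK of the source chain's consensus

def verify_storage_proof(state_root: bytes, msg_id: bytes, proof: list[bytes]) -> bool:
    return True   # placeholder for a Merkle/storage proof against the state root

@dataclass
class LightClient:
    """Stores source-chain state roots that consensus proofs have attested to."""
    verified_roots: dict[int, bytes] = field(default_factory=dict)

    def update(self, block: int, state_root: bytes, consensus_proof: bytes) -> None:
        assert verify_consensus_proof(state_root, consensus_proof)
        self.verified_roots[block] = state_root

@dataclass
class MessageBridge:
    """Executes messages proven to have been sent on the source chain."""
    light_client: LightClient
    executed: set[bytes] = field(default_factory=set)

    def execute(self, message: bytes, block: int, inclusion_proof: list[bytes]) -> None:
        root = self.light_client.verified_roots[block]              # trusted root
        msg_id = sha256(message).digest()
        assert verify_storage_proof(root, msg_id, inclusion_proof)  # sent on source?
        assert msg_id not in self.executed                          # replay protection
        self.executed.add(msg_id)
        print(f"executing {message!r} on the target chain")

lc = LightClient()
lc.update(100, b"\x11" * 32, b"snark-goes-here")
MessageBridge(lc).execute(b"mint 5 tokens to 0xabc", 100, [])
```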
On the topic of trust minimization, which I suspect you have at least some opinions of, maybe, I'm unclear. So, first things first, how does trust affect kind of the safety of a bridge? That's A, and B, you know, the maybe higher, maybe mid-wit question is, can you make a safe bridge? Right? Like, bridges seem fairly complicated in some way or another, you know? It's like a lot of moving parts potentially, you're dealing with like many distributed systems. Is it possible to make a safe bridge, or are we all just like kind of screwed? Like is it just, you know, a Sisyphean task of pushing a boulder up a hill repeatedly just to have it hacked or something. Yeah, I think the trust assumptions are really important. So currently most bridges have basically like a multi-sig, or a multi-sig in some form, as the trust assumption, and so that's really bad because of course, for something like Ethereum where you have many, many validators and a ton of economic security, your multi-sig's never going to have the same amount of security. So I think the trust assumptions are really important and if we can basically have a bridge out of Ethereum that has Ethereum's level of security securing it, which with a light client you can have if you're verifying Ethereum's consensus in the execution layer of another chain, you can, okay, so it's actually a little more nuanced. You're not getting all the way there, but you're getting significantly closer than having a multi-sig. I think that's a huge improvement in terms of the trust model of a bridge, and I think that's really important. I think, of course, with a lot of bridge hacks, there's some bridge hacks that have been based on the trust model, so like there's been some bridge hacks with a multi-sig where they actually compromise the multi-sig signer keys, and so that's an illustration of, oh, the multi-sig is very insecure. There's a lot of other bridge hacks which are just smart contract hacks or like smart contract bugs. Those, of course, it doesn't matter what the trust model was. For example, even the best snark-based approach could also have smart contract risk, and so I think it's really important to be intellectually honest that there's risk in both dimensions, like there's risk in the trust model, there's also smart contract implementation risk. How do we reduce smart contract implementation risk? What do we do? Formally verify everything? Do we just go and say, like, Solidity sucks, and go and write a new language that is like Rust or whatever? I mean, Guillermo, you're the crypto researcher. I mean, I'm like a fake researcher. I mostly do like a bunch of weird math, but I could pretend. No, I mean, I think, look, I think that's a whole other panel or conference-wide topic of like, how do you write this stuff? But I'm gonna make you give me your two-minute version of it. Sorry. What do you do? I'm the tyrant and the moderator, so I can make you do it. Yeah, I think you do all the things, right? You have really good engineers that are smart and thoughtful. Okay, step one, you audit your code from credible auditors, or you are an auditor and you still hire auditors. So that feels like an unfair advantage, but fine, all right. But you audit your code. Formally verify your code. That seems like a great idea, right? But formal verification has its own trade-offs, in terms of understanding it and writing the rule set. Actually, Chris might have opinions on that too. Then you have a bug bounty program, right? And then you also just, like, last. You just put in the time. The longer you survive, I truly believe the more credible your product is, because it is a very antagonistic environment out there. And so just surviving, I think, is increasingly meaningful: the longer you survive, the more likely you are to survive, in my opinion. Yeah, I think there are other smart contracts out there on Ethereum that are much, much, much more complicated than a bridge. For example, I think if you look at the Maker smart contracts or the Compound smart contracts, compared to a bridge smart contract, they're much more complicated. There are many more moving pieces. And of course those things have bugs too, but people seem to have figured out ways to make these smart contracts in ways that are bug-minimized or not catastrophic. And so I don't think it theoretically seems like, oh, it should be impossible to make a secure bridge. Actually, it's maybe even a good question: why hasn't it been done until now? Because in my opinion, bridges aren't even the most complex smart contracts out there. You probably have opinions on this, Chris. I completely agree. And I think those are both great answers. Like we need to build up our Lindy-ness too; as bridges exist for longer and longer without being hacked, the more we can trust them not to be hacked in the future. But one thing I wanted to really hammer home, which you touched on, is if you're using a trusted bridge, you're not just trusting whoever these parties are. It might be Sequoia or a16z that's holding this key. But what we've seen is that over a billion dollars has been hacked from multi-sig bridges. And it's not that these parties are just like turning malicious and running away with the money. You're trusting these people to have world-class security. This is not just average everyday hackers that are going after these bridges. It's nation-state level actors, like North Korea's Lazarus Group, and they are trying to insert employees into your company.
They are doing supply chain attacks, where they're introducing vulnerabilities, like, deep down into your software stack. So in terms of trust model, it's just you really want to rely on the smart contracts themselves for your security. Yeah, I mean again, let's emphasize that trustlessness and permissionlessness are like, they're not really features. They're requirements ultimately. And again, if you just want to trust somebody, then you don't need a bridge. You can just use Binance as your bridge, right? And fine, right? My favorite bridge is Coinbase. It works, right? If that's what you want. I think the other angle to look at this too from the multi-sig angle is also censorship and regulatory issues. And wait, can that bridge even get shut down because someone, a government, says, hey, don't do this anymore? Yeah, I think as the regulatory landscape gets more and more strict, having the multi-sig is going to really be tough just from a regulatory perspective. And I think that's why the SNARK approach or the proof-based approach, where it's like, for these light clients, theoretically anyone can generate the proof. Anyone can send the proof to the smart contract on the other side. Anyone can relay a message. And it's truly uncensorable. It's going to be very, very important. One thing you said earlier is that your approach is trust minimized. But I'm kind of curious why you said that and not trustless, because it does seem conceptually trustless to me. Yeah, I think trustless is like a really strong term and a lot of people have a lot of feelings about it. So yeah, we try to be aware of that. I think it's trust minimized because ultimately you're still trusting the consensus of the other chain. So for example, Ethereum has really good economic security, but say I'm bridging from another chain back into Ethereum and say that chain only has $10 million of economic security, I think it's not trustless. You're still trusting the other chain's validators. Yeah. Cool. Fair enough. So essentially kind of the way I can summarize this latter part of the panel is that like smart contract security, anything that's unsafe is just cope. Right? Like if you're not, you're just doing it wrong. I mean, right? Like you were mentioning, right? MakerDAO is like this massive set of contracts. And like, I don't know, has it been hacked? Not as far as I know. So I mean, what is your perspective on this? Like how do you make smart contracts secure more generally? I asked this question, so I'm gonna put you on the spot. But sorry, it's just happening. Sure. No, so actually, so the most in terms of number of bridge hacks has been just regular smart contract vulnerabilities. And yeah, there is no reason for these vulnerabilities to be bridge specific. You know, a lot of them are signature based or some of them are just, you know, regular smart contract bugs. And so when we design smart contracts, we design them in a way that makes it easier, easy for an auditor to look at the code and verify our security invariants. So, you know, these security invariants are, you know, if a token kind of like starts a transfer here, is it gonna be completed here? Can, you know, multiple completions happen at the destination? Is there some other way for someone to kind of like extract value from the bridge? And so if we can, you know, basically make it as simple as possible to verify those things, the easier it is for auditors to kind of catch any bugs that might show up.
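The "security invariants" Chris lists can be written down as executable properties. Here is a toy, hypothetical example of two of them, no double completion and never paying out more than was deposited, checked with a small randomized simulation; this is a sketch of the idea, not anything resembling Hop's real contracts or audit process:

```python
# Toy property check for two bridge invariants:
#   (1) a transfer completes at most once on the destination, and
#   (2) total paid out never exceeds total deposited.
# Purely illustrative.
import random

class ToyBridge:
    def __init__(self) -> None:
        self.deposited = 0
        self.paid_out = 0
        self.pending: dict[int, int] = {}   # transfer id -> amount
        self.completed: set[int] = set()

    def deposit(self, transfer_id: int, amount: int) -> None:
        self.pending[transfer_id] = amount
        self.deposited += amount

    def complete(self, transfer_id: int) -> None:
        if transfer_id in self.completed:        # invariant (1): no replay
            raise ValueError("double completion")
        amount = self.pending.pop(transfer_id)   # unknown ids fail here
        self.completed.add(transfer_id)
        self.paid_out += amount

def run(seed: int, steps: int = 1000) -> None:
    rng, bridge, next_id = random.Random(seed), ToyBridge(), 0
    for _ in range(steps):
        if rng.random() < 0.5:
            bridge.deposit(next_id, rng.randint(1, 100))
            next_id += 1
        elif bridge.pending:
            bridge.complete(rng.choice(list(bridge.pending)))
        assert bridge.paid_out <= bridge.deposited   # invariant (2) every step

for seed in range(20):
    run(seed)
print("invariants held on all runs")
```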
Essentially, like what you're proposing is, well, simplification for sure, but also some notion of formal verification, or is that not actually it? I think there's cheaper ways or more cost effective ways to get security than formal verification. Okay, and I guess as a follow up question, why does it feel like every bridge has been hacked and not like every DeFi smart contract has been hacked? Is it just like a bridge specific thing? Is it just like everyone decides to hop on a bridge and be like, ah, this is cool, we're just gonna put all our money in here? Like, what is it? It feels that way, because it's kind of true. And honestly, I think it's because it's a new market and it's a, you know, there's a big opportunity. And so a lot of people are kind of just jumping in a little too fast. Auditors were very, very constrained during the bull market. So it was like really hard and really expensive to book a good auditor. But people still wanted to be, you know, in that market early. And so we saw a lot of them get hacked. Yeah, I guess I'd say like, Chris, correct me if I'm wrong, but I think all the hacks are actually smart contract hacks, right? Like all of them, none of them have been- No, I think some of them, one of them, the Axie bridge was a multi-sig, like they compromised the keys of the multi-sig the hard way. I should say that, yeah, okay, fine. Multi-sig compromised keys, but in terms of like your trust model, our trust model, this optimistic trust model, that hasn't been the source of those hacks yet. Right, yeah. So in terms of value lost, it's been key compromises, and then in terms of number of hacks, it's been smart contract vulnerabilities. Interesting, okay. And so with that, you know, it kind of, like we're taking a little bit of a digression here, but I'm curious. So do you think the bridge space is like fundamentally a cooperative space, or do you think it's very much a winner-takes-all space, in which case, what would the winner look like? I mean, I guess if you knew you would already be building it, or maybe you are building it, but like fundamentally it feels like to me, that a bunch of people are building bridges right now, but really like kind of, you know, if you build a good bridge, there's the best bridge, and that's kind of it. Like it feels like there's like a metric by which we can measure bridges and be like, this is the best damn thing possible, anything else sucks. We're just gonna use that. Wait, I don't know that there is a single metric you can look at bridges on. Like, let's talk about asset bridges for a second here, where you want three things: you want it to be fast, you want it to be cheap, and you want it to be secure. And so I want those three things, give me the max of all of those. Sure. And so I think that it's not necessarily clear that that's going, and maybe you want permissionlessness too, add that in there, maybe that's in security. So like, look, I think on the asset bridging, it's going to be very price competitive. Asset bridging, kind of the model here is fees, how much fees are you taking? While maintaining a minimum level of security, and that minimum level of security should be like really fucking high, excuse my language. I think it's okay. Yeah, trustless and all that. Yeah, thanks, we're all adults here. But so I think that's kind of interesting. And I think Chris can talk about this, like asset bridges, it's going to be very fee competitive. It's great for the consumer, by the way.
Right. But then I think the other thing we can go into is talking about like data bridges. I find it really interesting how you even charge for that, or like what the fee model should be. And I also don't think that's at all figured out yet, either. But maybe, go guys. Yeah, I agree. You know, for an end user, the risk for a bridge, at least for a liquidity bridge, is actually really, really low. You're only exposed to a bridge for the brief period that you're crossing the bridge, and then you're out. And so, you know, in terms of security, or really in terms of cost, you know, it comes down to like how secure you are and how much you're gonna have to pay for liquidity. And the more secure you are, the less you'll have to pay for your security and the cheaper it is gonna be for end users. But end users don't need to think too much about the security of the bridge because they have such limited exposure. Yeah, yeah. Yeah, I think for users, like end user token bridges have slightly different desired properties than, for example, a data bridge that's maybe transmitting like governance results or like other actions on, you know, Ethereum broadcast out or vice versa. So I actually don't think there is like one number to optimize; that depends on like who your user is. Like, if your user's a consumer, they're gonna care about different things versus like a DAO versus, you know, whoever else it's gonna be. So I guess, well, I just wanna add one other thing around this. So like your question was, is there like a winner takes all market here? And one other concept that I think is worth floating here is this idea around capital efficiency where if, say for example, your bridge required $20 million of TVL but could do like a billion dollars of volume a day and you were able to charge like a basis point on that. Even if that $20 million gets hacked or whatever, there's probably still a reasonable business model in there where the profit of the bridge was worth the risk of that capital. And so again, I think the space here where I think the bridge space has kind of gone in the wrong direction is like locking up lots and lots of dollars while also not making that much revenue. And like that's got the ratio the wrong way around, right? And so a winner take all thing, I don't actually think this is a winner takes all market. On the asset side, I think that being able to do a lot of volume with minimal TVL, like lots of capital efficiency, I feel like that will be competitive. Maybe not, again, winner takes all. And then on the data side, again, I think some of the ZK stuff that Uma's working on gets more and more compelling and interesting, but I'm also, I don't know, do you think there would be a winner take all on the data side? Well, I think of course I'm a snark maxi and I want us to, I want all data transfer to go through the succinct light clients, the succinct message bridge, whether that'll happen, I think it's hard to reason about. I do think one thing I will say is I think this snark based approach for succinct light clients and proof of consensus is kind of going to be the end design of like a data bridge, especially where if you, for example, if you like don't necessarily care a ton about latency, like I don't, you know, and maybe you can layer a liquidity network on top of it eventually, but I think the snark based approach is technically feasible.
Like it works today, we just need to do some more engineering and do implement proof of consensus for all the chains and just implement it, but it works today and it's, you know, I think it is like the end design, like I don't necessarily think you can do much better in terms of a trust model. Yeah, I think we'll probably see a power law distribution like a lot of other markets. And the other thing we haven't talked about is, you know, there's a lot of assets out there and it's hard for liquidity bridges to support long tail assets because you don't want to just like maintain the liquidity for an asset that no one's using to bridge. And then there's also going to be longer tail networks in terms of what's supported. So, you know, we'll probably see, you know, bridges that are kind of further down that power law that they'll probably attack the longer tail assets, the longer tail networks, where they can gain market share and the bridges that are kind of at the top, they're going to be focused more on security and making sure that their LPs don't need to be paid a ton to keep you efficient. I guess I'll bring it back really quickly to the original question. The reason I ask this is, you know, in a lot of ways, what are you paying for when you are like, you know, giving a fee to somebody? Is you're paying partially for the fact that like that thing cause a risk, right? So in some sense, like security and economic efficiency are like highly correlated. So you can think of them almost as like one single number, right? Like I don't think you actually do better by being less secure in terms of economic efficiency or at least not a lot better, right? So this is where the question comes in, which is, you know, in some sense, right, if you have like the maximally secure bridge, like it's very likely that it's also the most economically efficient bridge. And if not, I mean like maybe not, is that the case? Like do you think that there could be like, you could like somehow lose security but still have people not pay very much for like the privilege of like, you know, bridging their assets? Yes, I think there's actually three costs. So there's gas costs, there's time value money, and then there's the risk that money is taking beyond just, you know, a riskless fee. Yeah, so I'm gonna make the simplifying assumption here that gas costs are, you know, with magical scaling results going to zero. So let's pretend on our magical fairy world that scaling as we've seen like 70 talks on are all perfect and beautiful and we get like approximately zero or negligible costs. That would be great for the snark based bridge. That would be great in a lot of ways. We'll be waiting for that future. Yeah. And so it actually becomes a really complicated question. Like I don't think we could, you know, get all the way to the bottom of capital efficiency here on stage. Really? Wow. Because, you know, both across and hop have both active liquidity and passive liquidity. There's different challenge periods. So that gives, you know, different trade-offs between, you know, what these different liquidity providers are kind of taking on. And then, yeah, there's also, you know, different, just different models in terms of, you know, how things are bridged. So, you know, with hop, you know, you can kind of think, we kind of think of capital efficiency in like three buckets. So we have like the active liquidity provider that cycles every 24 hours. 
We keep a very long challenge period because we think it's important to have like a human response if things go wrong. Like your team needs to be notified, you need boots on the ground and, you know, be able to know that if your infrastructure's down you can have your engineers get it back up and protect your bridge. And then there's the passive liquidity providers. And so Hop uses this AMM model that we haven't talked about yet. And so we have this like intermediary accounting asset and we're able to kind of shorten the challenge period from an optimistic roll-up's seven days to one day. And so that's our 24 hour challenge period. And so you can kind of think of the AMM liquidity as just taking on the seven day capital lockup only for the net flow, not the total flow. Right. This is what Uma was mentioning as well. Right. In some sense you can think about it as the derivative of the flow as opposed to like, you know, the integral overall of it, but sorry, okay. Yeah. And so that's partially the AMMs, but also partially, you know, the arbitrageurs in the AMMs, that's really where they're taking on the seven days of the net flow. And then the AMMs themselves, that kind of scales to transfer size but not transfer volume. So, you know, we could have a bridge that has two AMMs that have 10 million TVL each. And, you know, that's gonna be able to support pretty large transfers very efficiently, but that can scale up to virtually any volume as long as the transfer sizes are, you know, below what a $10 million pool could handle. So I guess, just for a little bit more context. So here the automated market maker is acting as a way of, so if I would, let's say I don't know, I have ETH and I wanna get like whatever WBTC on some layer two, I can instead of kind of bridging ETH and then swapping it, I can just directly kind of like bridge swap in, is that correct? Or... Yeah. Okay. So, I'm still unclear. Also, I guess before we kind of get to that, for context we're talking about, so there's two layers here that we have, right? One of them is like the data layer, which is kind of what we were talking about previously and what security models we have. And there's kind of like the economic layer, where we're talking about economic efficiency, that kind of is built on top of like a messaging layer that you can trust, right? And I wanna keep these two things separate. So right now we're talking about like the thing built on top, like how do you transfer assets correctly and how do you like ensure that you can verify that these assets have been put into one side and whatever. So let's stay on that specific topic, and afterwards we'll get to the messaging part which I think Uma has some other spicy takes on. But okay, so, but anyway, so sorry, describe this AMM design or like what is the point of this automated market maker I guess at the end of the day? Sure, so the main purpose is actually to, one, kind of price liquidity flows between different networks. And then also, there's no way, if you're just using the canonical assets, to not take on that full seven day lockup for the total flow of assets. And so because we have, we basically have this intermediary accounting asset. So it's like an hToken. Got it, got it. So you can essentially pay to reduce the risk. So like let's say you're waiting 24 hours, I can instead be like I'm gonna swap this on some exchange, right? Which lets me actually have the native asset and someone else can take on the risk.
Basically we can exit these hTokens in 24 hours versus the canonical assets that are exited in seven days. And so it's like hETH and ETH is the pair. And it's basically a market between the 24 hour exit time and the seven day exit time. Got it, got it, got it. So it essentially prices the risk that you are taking on by waiting, whatever, seven days to exit the asset versus 24 hours. Yeah, as well as demand flows between the two. That's right, that's right. Yeah, the thing I'm just gonna add is maybe zoom out a little bit. Like our take, Across's take, is that this is a financial engineering problem. Like if you solve the data, solve the data messaging, you have a financial engineering problem. And I think the more innovative you can be with your financial engineering, the more capital efficient you can be, da-da-da-da-da, right? And so again, Hop and Across share a lot of philosophies. The one thing that we've done that is quite different is we don't have these AMMs at all. Instead, we are actually just having this single unified liquidity pool and we're using the native bridges to kind of rebalance assets between this hub and spoke model. And if I'm going to like, shill our product a little bit, the one thing that I kind of like about this is I think that that liquidity model allows us to be a bit more capital efficient than having AMM pools that have to be kind of rebalanced off-chain. Yeah. So like I said, the AMMs, it's only based on transfer size. They just need to scale to transfer size, not transfer volume. So even if we were to have just like $2 million in each AMM, we could support unlimited volume as long as the transfer sizes are below that threshold and arbitrageurs are coming in and arbing that out. And balanced. You need like... Yeah. I'm going to push back a little bit because you need the flows to go both directions, right? And if I keep on pushing one direction, you're going to run out of liquidity or make it inordinately expensive. And that's why I say that Hop's cost of capital, for the arbitrageurs, is just seven days of the net flow. So if there's a perfectly balanced flow between Arbitrum and Optimism, we don't need arbitrageurs at all. Okay. Yeah. And it's like these concepts again, call it like basic financial engineering concepts or like market making concepts: if you can cross flow, like if you got a million dollars going this way and you had a million dollars going the other way, it's beautiful. You're fine. Not only that, but it means you have like really low costs. You don't have to rebalance things. You just net it. It all works out beautifully. And so again, I think a lot of the financial engineering is how can you incentivize flows that net out, and/or, when there are imbalanced flows, have the lowest cost of capital. And there's lots of like fun financial engineering tricks in that, I think.
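A quick set of made-up numbers helps tie together Uma's "integral versus derivative" framing and Chris's "net flow, not total flow" point, assuming (hypothetically) a seven-day exit period on the canonical path and flow that is mostly, but not perfectly, balanced:

```python
# Made-up numbers for the "integral vs derivative" and "net flow" points.
daily_a_to_b = 10_000_000     # $ bridged chain A -> chain B per day
daily_b_to_a = 9_000_000      # $ bridged chain B -> chain A per day
days = 30
exit_period_days = 7          # canonical exit / challenge period

daily_net = daily_a_to_b - daily_b_to_a     # $1M/day of one-sided flow

# Lock-and-mint: the honeypot is the running sum (the "integral") of net
# deposits, so it keeps growing while flow stays one-sided.
lock_and_mint_tvl = days * daily_net

# Liquidity bridge: LPs and arbitrageurs only carry the recent imbalance
# (the "derivative") through the exit window, not everything ever bridged.
liquidity_bridge_capital = exit_period_days * daily_net

print(f"gross volume over {days} days:   ${days * (daily_a_to_b + daily_b_to_a):,}")
print(f"lock-and-mint TVL after {days}d:  ${lock_and_mint_tvl:,}")
print(f"liquidity bridge capital:        ${liquidity_bridge_capital:,}")
```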
But let's maybe go to... Yeah, I was gonna say. So next up is, you know, fine. So let's go back. So we've taken this first part, right? Let's zoom out and go to the second part, which is honestly what underpins the entire financial infrastructure you guys are talking about. And it is, okay. So now I ask you the question of, do you think that data bridges will themselves all kind of converge into like one single data bridge, right? Like why not, you know, why doesn't this just become, like, standard infrastructure that's open source? That's just like classic, normal, everyone uses it. How does one even monetize this thing? You know, for example, IBC works pretty damn great, right? And like the idea would be IBC for like Ethereum et al. would be very cool. And obviously that's kind of what you're working on. But how does one monetize that? I mean, IBC is just, you just implement the thing and then you're cool. Yeah, it's a standard. Yeah, I think what we're building at Succinct is going to be like open source. And we view it as like a public good because it is like a public good for all these ecosystems to have these succinct light clients and have this like snark proof of consensus for the consensus algorithms. And honestly, it goes beyond just having a bridge. Like you can imagine this actually being really useful for light clients and wallets so that you don't have to connect to a centralized RPC and you can like do something peer to peer, you know, something further down the line. In my opinion, wallets have a lot of other things they need to do before that issue. But yeah, down the line, I think the succinct light clients are important for things even beyond bridges. I think, yeah, that being said, so I think we want to build this in a very public goods oriented way. But I think, you know, just because something's a public good doesn't mean that, oh, it has to make no money. Like for example, operating these succinct light clients is really hard. So, you know, and believe me, like I've had to write the infrastructure to, you know, watch one chain, generate the ZK proofs and then set up the infrastructure to ping a bunch of different light clients on a bunch of different chains. And this is not infrastructure that, you know, people are going to want to run themselves in a timely manner. Like probably there's going to be an operator, I mean, currently us, that does this and then has guarantees around it. Like, okay, if you're going to rely on our light client, you probably want to, you know, know that it's going to have like a 99.999% uptime. And so I think I really view this as like, what we're building is a public good, of course it's going to be open source, of course it's going to be audited. The more people that can contribute to it, the more we can all make sure it's like canonical infrastructure that's really secure and everyone feels really good about. But then operationally, it's kind of similar to like an open core company where if you have something, I don't know, like in the web two world like MongoDB, where MongoDB is open source, but you know, Mongo the company runs the open source infrastructure. And so I think a similar business model like could make a lot of sense here where you're basically paying for the convenience and like the peace of mind of like, you know, this infrastructure is actually going to work and is actually going to be up. That being said, we're not going to always be like our own operator. Like we don't want to be the only operator, for like censorship resistance. And so it's important that anyone can actually run the operator themselves. And yeah, I think there's like some interesting things you can like do around that in the future. Is it possible, though, tell us, how does one decentralize this? I mean, it feels hard, right? Like you're kind of relying on someone to run these things. I mean, just tell me if I'm wrong, but anyone could do this, it's just that, like, in practice no one's going to. Yeah. All right, other than you, in some sense.
Yeah, it's like we're going to coordinate all this infrastructure and then, you know, and we're going to do it the best. So you have like the light client that updates the fastest and like you can have like a nice dashboard that's like, yes, my light client got updated, things like that. And then in the future, I think a lot of the ZK EVMs have been thinking about really similar problems where they really need proofs to be generated, but they don't want a centralized prover. And so they're going to have like a decentralized prover network, like that's separate from their sequencer. And I think that's really similar thing could also apply here where you're going to have like a bunch of provers that, you know, basically I kind of think of it as there's this economic exchange going on where there's people who are relying on the succinct light clients who want proofs and then there's people who are able to provide proofs and there's like some exchange of value and then maybe you have like a marketplace and you like take some cut of that or something. So I guess, you know, kind of more generally like let's say I want to prove something that happens only once every once in a while, right? Like what incentivizes me from like paying, you know, this like big pool that feels like what a lot of things, like how a lot of things work, right? Like we, you know, kind of only bridge or only like send data across kind of chains, like very fairly infrequently, right? So is the assumption that there's going to be like a large enough volume that like people are, you know, kind of incentivized to always run these things or do we kind of expect every individual user to just like provide their own proofs as needed? I don't know how large these proofs even are. I mean, how long does it take? Like, yeah, I think expecting individual users to generate these proofs is totally infeasible. Like I think to make it, you know, timely, you need like custom hardware probably in like the limit. Generating the proof takes a while. How long and in what machine? Sorry. So currently for Ethereum, our proof of consensus for Ethereum takes like four minutes to generate on like a pretty beefy AWS machine. I think that over time, that time it takes to generate the proof will definitely trend down, hopefully closer to zero, but you need like GPU or there's a million ZK hardware companies that I'm sure will use and that's gonna make it better. But again, yeah, a user's not gonna run a ZK FPGA in their house. Like that doesn't make sense. Guillermo will. Yeah, I don't know. Gotta minimize that trust, you know. Truly minimize it. But yeah, the beauty, the beautiful thing is if I'm generating the proof, it doesn't matter, you're not trusting me. I'm just generating the proof for you, but the chain is verifying the proof. So there's no trust assumption if someone else is generating the proof. It's just a sense. And the important thing for censorship resistance is that you could generate the proof. You could spend 20 minutes and spin up an AWS machine and generate the proof yourself and send it to the light client. You're generally not gonna do that and you're probably willing to pay for the convenience of someone else doing it. Cool. I guess we are like almost getting down to time unless there are any last burning thoughts. I think I will turn it over to the audience to see if there are any questions. Are there any last burning thoughts, ideas, notions, constructions? Let's get some questions. All right, let's open up some questions. 
I think I saw you first, but... Or you can shout and we'll repeat them too. Hey. Yes, I have a question for Uma. First one is like, what is the time gap for you like to propagate the blocks? Like from my understanding, like it's working similar to like Rainbow Bridge, but like with ZK. So like you should have like some gaps, like you cannot propagate every Gnosis Chain block to Ethereum and like basically like any chain to Ethereum you cannot propagate like every block. And like the second question is like, how much does it cost to like maintain the like, relayer or like a validator? Yeah, so the question kind of was, how long does it take to propagate a block and presumably you're not propagating every block? Yeah, that's right. So for us, like the block propagation time is first we need to wait for Ethereum to finalize. So that generally takes, in expectation, around two epochs, which is like around 12 minutes. And then on top of that, we have our proof generation time for the particular block. So that takes another fourish minutes right now, but I think that can go down quite significantly, hopefully someday it's on the order of seconds. So hopefully that should be no issue. And you can probably do things around not having to wait for finality that I won't go into now, but I think in the future, that's also possible. With proof of stake, you don't need to propagate every block. So in proof of work, you actually do, because the longest chain is this like sequential thing. So you need to verify every single block to verify the chain is the longest. But proof of stake, as long as the validator set stays the same, you can just send an arbitrary block, as long as you have the same validator set. And so for us, we like send a block, you know, currently we do it every 10 minutes or something like that. Just, you know, and so there's like an extra 10 minute delay on your bridge basically. And so there's like a trade-off between how often you wanna propagate blocks and the gas cost you're willing to tolerate. But if you have like a huge transfer, for example, that you wanna do right now, you could like send the block right now. Yeah, we were doing some of this math earlier offline, but basically it's like 12 to 16 minutes, you can send blocks, and it costs you like 180K gas on the destination. Yeah, yeah, verifying a proof. So verifying a proof, there's like pre-compiles for pairings. And so verifying a proof takes like 200K-ish gas. So you have like 200K-ish gas to verify a proof. And then you have like whatever smart contract logic, that should be quite minimal because you generally try to stuff it in the snark, to keep the light client up to date. And then there's some really, really fun things you can do with the recursive snarks. So say I'm bridging from Gnosis Chain, Polygon and, you know, whatever other chain. And I have proofs of consensus for all of them. What I can do, we don't do this right now, but what we're working on is if you have a proof of consensus for three different chains, you can actually recursively prove them in one snark. And so what happens is you're amortizing the gas costs across all your partners that you're bridging across. And so that's really nice because with recursive snarks you can stuff basically an unbounded amount of computation into 180K gas. And so I think in the future, when everyone starts using this, then actually it becomes gas efficient enough.
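Pulling the numbers from this exchange together, here is a rough back-of-the-envelope of latency and per-chain verification cost, using only the figures quoted on stage (about two epochs of finality, roughly four minutes of proving, a ten-minute batching interval, and about 200K gas per verification) plus an assumed three-chain recursive proof for illustration:

```python
# Back-of-the-envelope latency and gas figures from the numbers quoted above.
finality_minutes = 12            # ~2 epochs of Ethereum finality, as quoted
proving_minutes = 4              # current SNARK proof generation time
batch_interval_minutes = 10      # how often a header/root is relayed today

print(f"finality + proving: ~{finality_minutes + proving_minutes} minutes")
print(f"worst case, waiting for the next batch: "
      f"~{finality_minutes + proving_minutes + batch_interval_minutes} minutes")

verify_gas = 200_000             # pairing-based verification cost on the EVM
chains_per_recursive_proof = 3   # hypothetical: e.g. Gnosis Chain, Polygon, one more
print(f"gas per source chain without recursion: {verify_gas:,}")
print(f"gas per source chain, amortized by one recursive proof: "
      f"{verify_gas // chains_per_recursive_proof:,}")
```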
You're talking about the proofs, the proof generator, for example for Ethereum to another network, but it's quite a complicated task, right? To generate a state proof for Ethereum and verify this proof, do you have some results in terms of CPU cost or gas cost for your bridge? Because I know it's really complicated and it takes a really long, long time. Yeah, I think what I said earlier, our proof generation currently takes four minutes. So it's on a pretty beefy CPU, and then for the gas cost to verify the proof: because it's a succinct proof, no matter how much computation you stuff in there, it's always going to be around 200K gas. Hi guys, Brian, three quick things. Guillermo, pretty timely comments on IBC. You should check Twitter after this. It's like a grim day for IBC right now. But two other questions. One, first, there's a lot of new, you know, everybody has some new validation method that seems awesome and like there's a bunch of abstract stuff happening here. It's new, it's better, it's great. But the majority of almost every system that exists right now still has upgradable contracts where the mutability of that contract basically underlies the security of the system. So do you all agree that most of that reduces down to like the trust assumption of the multi-sig that's controlling the upgradable contract? And the second question was for Uma. In a world where like all of the hard, technical surfaces of recursive, state-driven, you know, all of this is perfect and fine, is there still this issue of like, especially when you're expanding to like very long tail stuff, that validator sets actually have clear incentives to be adversarial to each other? Like there's no reason that validator set shouldn't sign or generate a forged block, because like imagine competitors, Arbitrum, Optimism, BNB and Polygon or whatever, it's actually very strongly in their interest to hurt the other chain and yet they have no economic incentive tied to, let's say, a block of a forked chain that they're presenting as a valid proof. Yeah, those are my questions. Let's do the first one first. So I, Brian, I think we're philosophically aligned, like upgradable contracts, bad, right? And I think there are things you can do where maybe there's like an upgrade path, maybe kind of, but generally speaking, no, no, no, no, no, like make immutable things, and that does make your upgrade path very painful, right? Across had V1 and V2 and it sucked. It really sucked going from V1 to V2 but like you did it and it works. And but like again, like, okay, I don't want to shit on Chainlink because that gets me in trouble, but like there's still a multi-sig behind all this stuff too, right? And so the thing we also have to realize is, like, in DeFi and crypto there are these very scary multi-sigs that are sitting around that could do very bad things and we, I think like Uma's spicy take is multi-sigs are bad and Chris, same, like just those are bad. Like delete multi-sigs should be the MO. Yeah, I agree. Hop is also not upgradable. We take the kind of Uniswap approach of launching a new version and everyone can still use the old version and it's gonna exist forever and yeah. And then yeah, Uma, you wanna go on? Well, you can insult multi-sigs too. Yeah, I also think multi-sigs are bad. Yeah, huge props to Hop and Across for not having an upgradable contract, that's huge.
I think for the ZK stuff, since it's all like very new technology, honestly for a while there are arguably going to have to be appropriate guardrails, and I think like a lot of the ZK EVM teams are thinking through a lot of the problems or like potential solutions, including time-locked upgrades where, you know, you can do an emergency withdraw and things like that. So I do think it is important to have a balance: not upgradable is the goal, but when you're working with really new technology, you definitely do have to keep the guardrails in mind. So I think that's pretty important. I think... It's definitely a trade-off. Yeah, yeah. And it could be a little bit more nuanced than that too. Like, there's designs I think that exist where it's not upgradable, but there can be like a way to... Like a stop button or something. Like yeah, like an emergency shut-off type thing, and I think patterns like that are not crazy. Yeah, yeah, totally. And then yeah, in terms of like the long tail of this stuff. Okay, I think yeah, that's a really interesting question. So in the long, long term, if you have a ZK validity proof, so right now proof of consensus only proves like header validity. But you know, with a ZK EVM, you could even have state transition validity. And then it doesn't matter if the validator set necessarily goes rogue, because well, the validator set can go rogue and sign a totally invalid block, but then if you verify a validity proof of the state transition, then you're not going to accept it. I mean, that's very far off in the future. And then if they have like slashing conditions on their own chain, then they also are subject to that. I think if you're trusting, if you're using a long tail chain and you're trusting their validator set to not do malicious things, like I think if the validator set signs off on some totally incorrect header for their bridge to hurt a competitor, I think ultimately they're actually hurting their users a lot more. And so to me, it seems like just a bad strategy for them to do that. Like I don't necessarily know if they're super incentivized to do that, but I do think it's important. Like I think for us, that's why the initial thing we built is for Ethereum, where it is actually extremely decentralized. They have a ton of economic security. If you look at Ethereum's validators, they're not going to go rogue in this way. So I think like that's kind of our initial starting point of focus. And again, for users of bridges, it's always important to remember, like if you're bridging between chain A and chain B, your security is the minimum of the chains you're bridging between. And you can't get around that no matter how secure your bridge is. Even if you have proof of consensus and validity proofs. So yeah, I think that's always important to remember. My question is, do you think the users, especially for the liquidity bridges, are pricing correctly the risk that the bridge has? And if not, like, what can we do to better educate them? I think if you're a liquidity provider and you're taking on the risk of every single bridge that that, or sorry, every single network that that bridge supports, you are an incredibly altruistic person. But to your question, I think the users here, again, to Chris's earlier point, the users here are not the ones at risk. The liquidity providers do have risk, and pricing that is, I think, something that is hard.
You know, again, the Lindy-ness is I think important here too, but it's hard, and again, you go back and even what Chris says, like, okay, some of these rollups are pretty new technologies, right? Even using the rollup, not even being an LP, has some non-zero and potentially relatively significant risk. So like, what does it mean to put, you know, 10 million bucks of ETH onto a rollup that still might not fully have fraud proofs implemented and all that kind of stuff? Like, there we are, and that's not an insult. I'm just saying that these are still cutting edge technologies. Yeah, also under discussed, rollups also have smart contract risk, by the way. Like people, you know, say bridges are so terrible, they've got all these hacks, there have been all these smart contract hacks, and rollups have the exact same problems, like the recent Binance hack, which was like a Merkle, I actually don't know all the technical details, but from my understanding, it's a Merkle inclusion proof hack. Like you could have the same thing in a rollup native bridge too, and so that's not to be undercounted. Absolutely, and there have been things reported, not exploited. Hey, that was awesome, and I hope this is not too open-ended of a question, but do you have any comments on shared validator sets, like are kind of being introduced in Cosmos or in Polkadot, and how that might affect the architecture of like the proofs that might need to be generated for a bridge? I, yeah, I'm not super familiar with this like shared validator set thing. I'd like heard a little bit about it, I should probably read more. I think as long as, you know, I think at the highest level, basically you can think of like, how would a node validate that, you know, a particular chain has come to consensus, even if they have a shared validator set. And then ultimately you're taking whatever computation like a node would do, and then you're putting that in a snark. So there's no like fundamental inability to do this even in the, even if you have a shared validator set, like you can always do this. I mean, okay, it might be a little harder to keep track of the validator set if it's like rotating across like three different chains or something like that. So I'd have to like look more into the details. And actually honestly the details here are often what really get you, but theoretically I don't think there should be any like trouble with it. Yeah, I think just the way I would reason about this, and again, it's not, there's a lot of new stuff here and a lot to wrap your head around. But the way I'd reason about this is like, I don't know if it was Uma or Guillermo's earlier point, it's the minimal like, what's the lowest common denominator security you have here? And so, you know, in our like optimistic design, all you're doing is saying, hey, this happened on this other thing, you have to trust the initial source. And if your shared validator set is better, great, if it's worse, great, like you kind of just kind of be sort of dumb about it and be like, are you trusting the consensus that this thing has reached? I don't think I know enough about shared validator sets to give a good answer. All right, well, with that last question, thank you panelists, everyone, please give them a round of applause.