So we're gonna have a few individuals doing lightning talks throughout the day. We'll take a break at lunch and then we'll have more opportunities for fishbowl panels, for people to come talk about designing for adoption, designing for decentralization and so forth. Personally, I've been quite interested in looking at how we can start designing for adoption without compromising on decentralization in the ecosystem, which I think we do quite often. So it'll be interesting to get feedback from individuals that are working on projects like Status. Rachel has been doing some work on ZK for public goods, Rahul's from Reddit, so we have quite a few different interesting projects. Rob will be talking about Remix and whether the ease of use of Web3 technologies and developer resources is actually a risk factor. We'll have people from a lot of the client teams also coming, because that's the area where I sit and work, and I'm quite interested to see what challenges they're facing as far as adoption for people staking, people running nodes, and making sure that decentralization in the ecosystem is taken into account when we create these resources for individuals. Additionally, we have Kelvin from Optimism talking about a risk framework for L2 bridges. That's quite a huge space that we've not really paid attention to at this point: as these layer twos start to come in, what does the UX around those things really look like? And then we'll have a few more client teams talk about integration and standardization around UX and front ends. Additionally, we'll have a few opportunities for people to just get on stage and riff on anything that they feel is important to the ecosystem. So hopefully as more people straggle in throughout the day we get a little bit more of a collaborative energy. 
So if there are certain empty chairs, coalesce around certain tables for now, so you have a little bit more interaction and communication with the individuals sitting next to you. I'd say congregate at the tables that are somewhat empty and then we can scale from there as people come in. I'd like to invite Rachel up to riff a little bit about ZK and designing for public goods, and we'll take it from there. Thank you. My name is Rachel. I'm at the Ethereum Foundation on the Privacy and Scaling Explorations team. Our team started out with a bunch of developers who were creating cryptographic primitives, programmable cryptography, a lot of zero-knowledge protocols that would allow people to prove that something's true without revealing their identity or everything about themselves. I came into the team after they had existed for several years. I was the first designer on the team; there are four of us now. But the past two years have been really exploratory and experimental, trying to figure out how design fits into this space. When I got there, my team lead Barry Whitehat had a bunch of project ideas in mind, like, here are some concepts for how we could apply some of these ZK protocols to applications. There were these proof-of-concept applications that he had visions for, and that's where we started. So we were working on Zkopru's private wallet and Unirep Social, an anonymous social media platform. Those were the first experiments: how do we put an interface on this complex code? What I've learned through talking to the developers I work with (I don't have any development background, so I'm sorry if I butcher some of the language) is that the type of code they're writing is strange for a traditional development background. 
There's some reframing that has to happen, and logic that has to change, when they're writing the code. That's why they refer to it as programmable cryptography: how can you take this abstract math and turn it into code with functions that execute certain things? And then how do you create designs on top of that? So someone asked me, and I've been asked a lot this week: if you don't have any development experience and you have little knowledge about cryptography, how are you designing for these projects? What it comes down to is not being afraid to ask questions, not being afraid to ask for clarity. And what I've learned is that when I'm in these sessions with the developers I work with, asking questions, I've gotten feedback that for them it is also super clarifying for the work that they're doing. So design becomes more of a facilitation-of-conversation role. The two places we're primarily doing that are for our protocols, helping people understand the value of the ZK protocols, what they can do and what use cases you might apply them to, and also creating proof-of-concept applications that make use of those protocols. The reason we're focusing on those two areas is because our audience is other product developers. I like to think of our audience as product teams: developers, designers, product owners, writers, anyone who wants to take a ZK protocol and turn it into a usable application. When I started, I had this misconception that I was designing for end users of products that would give people privacy and anonymity. But the more I'm on the team, the more I realize we're doing research for other builders, for other creatives, because all of our work is open source. Everything we're making is a public good and should be for the community to use, to work from, and build off of. So the real value of the proof-of-concept applications is not the product itself that allows people to anonymously chat. 
It's, for example, an anonymous chat application that shows other builders: this is possible, you can build this thing, and here is just one example of how we've built it. The other important thing, going back to the protocol conversation, is that the protocols are kind of like rules: these are the functions you can execute with this code. At first, a lot of our protocols just had documentation online that was there for curious people to read, interpret, and try to build with. What we're learning is that's not enough. Protocols can also have websites. I was super inspired by Lens Protocol, because their website shows use cases for how you could use the protocol and the people who are already using it. That's what we need to do with all of the protocols we're working on. We can't just expect people to take something technical and riff off of it. Just a quick question around that. Why do you think it is that in this ecosystem we're building technology that doesn't have a clear use case, and then pushing the use cases onto individuals that might be able to use them? And the other thing you mentioned was that your team currently doesn't have any end users, but you're developing these prototypes for other teams to get an idea of how to utilize the technology. So how do you make design decisions when you don't have a clear audience from the user perspective, nor a clear audience in terms of the companies that might benefit from it? How do you navigate that challenge? I'll start with that question because I forget the first question. OK, I'll come back to it. So yeah, audience, that's a big question: who is our audience? When I started on the team, I was working for a little bit and thinking, wait, we're assuming that people want this. We're assuming needs. How do we even have proof that people want this type of technology? And I was kind of upset. 
We're starting backwards. This isn't how design works. But I think what I've learned is that it's OK to start where we are, to start with someone who has an idea of how this code could be applied to an application. You need to start with someone who understands that deeply, and we just need to try it: make the proof of concept, test it internally, test it with people in the ecosystem who are interested, and get insights from that. And also, as designers, we're actually learning more about the protocols and how they work. It's not how I learned the design process is supposed to work, but it's working for us, because we can see ourselves learning more and understanding the content more deeply. And we're at a point now, after only a year and a half, where more people on the team are starting to have ideas for new projects we could work on and new ways to apply the protocols, and it's not coming from just a few people. So I think we have to be able to communicate the value in order to get to a point where we're having more conversations with people who are saying, this is what I want to prove without exposing my identity, for example. Isn't it a bit restrictive even then, that you're still not getting a strong enough signal from the projects that would benefit from it about their exact use cases and how the protocol might need to be adjusted, because it's coming from this backwards approach where the protocol is developed without a clear use case? Let's say Status is developing a messaging protocol: they have a clear use case for how to implement it, because they know what they want to build on the product side, whereas you're building this bottom layer without that clear context. Are there any learnings you've gotten from teams, or feedback from people that do want to implement some of this stuff? Yeah, all the time. 
Like, I think that the community really challenges us. I've been sitting in on talks and going to the community hub that we have downstairs, the Temporary Anonymous Zone, and we're constantly getting questioned: wait, so you're only able to do XYZ with this protocol? Isn't that limiting? And that makes us reflect on what we're doing and see where we can improve. Thank you very much, Rachel. All right, good morning, everyone. It's fun that we're all hungover together from the rave and now we're here trying to design beautiful systems so that more of us can go to the rave and then come back. And actually, that's a good point: how many of us feel safe enough to be hungover and still use a crypto wallet to do some weird transaction? I probably wouldn't, because I know I'd mess up a lot, right? Over the last year, I've had the pleasure of building and maintaining a crypto wallet that has been used by over 3 million users now. I work at Reddit, and I prepared some notes about how I think wallets are today and what wallets could be tomorrow, not should but could. So today all the wallets kind of look the same, right? Whether you have a web wallet or a mobile wallet, you're asked to set a password, then you're told to save your recovery key somewhere, and then if it's a mobile wallet, you're given an option to back up in your iCloud or Keychain or whatever. And if it's a web wallet, we kind of just store it in local storage, encrypted with your password. And all of us here know how many hacks that has caused, and all the things that are potentially not so great about it, right? So when I first thought about this: within the crypto space, I actually think everyone kind of gets this password stuff. Like, we all get it, right? 
The password is not the recovery phrase; just because you save your password doesn't mean you're safe. You still need your recovery phrase, and if you forget your password, it's okay as long as you have your recovery phrase. Unfortunately, this paradigm is flipped from the Web2 model, where the only thing you care about is the password. As long as you have your password, you're safe. If you forget your password, that's fine; there are ways to recover or reset it one way or another, right? Which is very interesting. Since the 1990s, every Web2 security engineer will tell you: don't write your password anywhere, don't put it on Post-it notes on your laptop. And here we are, since 2014 we've been working on Ethereum, and what do we tell our users? Save your recovery phrase. Don't keep it in one place; write it with pen and paper, write it in five different places. If you're paranoid enough, put it in bank lockers, fly halfway across the world just to recover your keys, if you're some kind of degen or whatever. Which is completely, completely different from what we do in Web2, right? So it's kind of weird for us to expect Web2 users to do the same thing, right? And if you've been around at Devcon for the last three days, or if you've been hanging around in the world of account abstraction, you know that there are like three really hyped-up EIPs. One is 4337, which lets you have a smart contract wallet without any protocol changes. You also have Argent building a smart contract wallet, which is amazing because now you don't need to care about your private keys. Well, you kind of do, but there are ways to go about it, right? And then there are these other, older EIPs. I'm forgetting the numbers, I think it's 2918 or something. And some of them kind of require protocol changes, right? 
My hot take here is that 4337, no matter how hyped up it is, is actually not the best idea, for multiple reasons. One is that it adds insane amounts of complexity. If anyone has been hanging around the 4337 world: what they do is you have a smart contract wallet, and then you have this other private mempool, separate from the Ethereum mempool that we all use, know, love, and get rugged on. So you do your transactions, you send them to the private mempool, then there are these bundlers who bundle all these transactions, kind of like Flashbots (I'm oversimplifying a bit), and then they take these transactions and put them at the top of the block, right? And we've all seen how that causes insanely more censorship resistance issues. It will probably cause a ton of other issues if not done properly. And for a normal user, now they have two weird hexadecimal strings to think about. One is their actual account, their EOA in the technical sense, and one is the smart contract wallet. And now when they go on Etherscan, they don't want to check their actual address, they have to check their smart contract address. So this is a bit messed up. Instead of trying to simplify things, we're adding another layer of complexity, right? I've been thinking a lot about these things, and the best thing is that there is no good answer, which is why I love this space. Because we need researchers, academia and non-academia, we need UX people, we need engineers, but we need all of them to come together and work on UX together. We don't just want designers to do their thing, and then engineers to do their thing, and then academics to do their own thing, because then none of them actually think about things together, right? 
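To make the flow described above concrete, here is a toy Python model of the ERC-4337 pipeline: UserOperations go into an alternative mempool, and a bundler collects several of them into one transaction aimed at the EntryPoint contract. The field names, the fee-sorting heuristic, and the dict-shaped "transaction" are simplifications of mine, not the actual spec, which has many more fields and validation steps.

```python
# Conceptual model of the ERC-4337 flow: ops -> alt mempool -> bundler -> EntryPoint.
# All names here are illustrative simplifications, not the real spec.
from dataclasses import dataclass

@dataclass
class UserOperation:
    sender: str        # the smart contract wallet address
    call_data: bytes   # what the wallet should execute
    max_fee: int       # fee the wallet is willing to pay the bundler

alt_mempool: list[UserOperation] = []

def submit(op: UserOperation) -> None:
    # wallets drop their ops into the alternative mempool, not the L1 mempool
    alt_mempool.append(op)

def bundle(max_ops: int = 10) -> dict:
    # a bundler picks the most profitable ops and wraps them into ONE
    # on-chain transaction, roughly a handleOps(ops) call on the EntryPoint
    ops = sorted(alt_mempool, key=lambda o: o.max_fee, reverse=True)[:max_ops]
    for op in ops:
        alt_mempool.remove(op)
    return {"to": "EntryPoint", "ops": ops}
```

Note how this makes the censorship concern visible: whoever runs `bundle` decides which ops ever reach the chain.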
So one of the interesting things I've been thinking about, which sadly I don't think is possible at large scale today, is something called multi-party computation. Is anyone here familiar with the idea, by any chance? Yeah, okay. For those who aren't, really quickly: the idea is that, currently, you have this one private key, and that one private key does the signing of the transaction on its own and then sends it to the great mempool that we all know and love. But in multi-party computation, you have a bunch of parties, or people, or clients that come together to do the transaction together. So for example, you can think of your private key being divided into three different parts, and then every time you do a transaction, two of those three key parts come together to produce the transaction, right? And then the most interesting thing (I was talking with someone who works at Web3Auth earlier, I think yesterday) is that one of the keys is stored on a laptop or a recovery device that you mostly don't use; it's kind of like your cold wallet. Your second key is stored on your phone, so it's a mobile wallet. And the third key can be used via login with Google or login with Facebook or something. But they don't actually store that key on Google's servers, obviously. They have this really interesting idea, something akin to a data availability committee of sorts: a committee that talks to providers like Google and Facebook and Reddit and Twitter, which stores the private key, but you need the OAuth login token from Google to actually execute the transactions, right? That's quite a bit better. Now you can lose one of the keys and it's fine. 
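The two-of-three key split described here is usually built on threshold cryptography. A minimal sketch of the underlying idea, Shamir secret sharing over a prime field, might look like the following. This is toy code of mine, purely to show the math: real MPC wallets use threshold signing protocols so the full key is never reconstructed in any one place.

```python
# Toy 2-of-3 Shamir secret sharing over a prime field.
# Conceptual only: a real MPC wallet never reassembles the key like combine() does.
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than demo secrets

def split(secret: int, threshold: int = 2, shares: int = 3):
    # random polynomial of degree threshold-1 with the secret as constant term
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any two of the three shares reconstruct the secret; any single share reveals nothing about it.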
You could forget your Google password or Facebook password and use normal login systems and normal password reset techniques, right? And then you go, okay Rahul, this is very cool, but will this work today? It does kind of work. Coinbase Wallet has something similar, but instead of dividing into three parts, you have two parts. But if you ever use it, you realize it's a bit slower than normal wallets, obviously, because, I mean, you can imagine: there are three keys, they come together, you have a bunch of OAuth systems coming in, and then you have these really interesting cryptographic algorithms that come together to do the signature, and then you have to validate the signature and combine it into a normal signature that Ethereum or the L1 or L2s can understand. So maybe what we really need are two kinds of wallets, right? One is the kind of wallet used by people who are familiar with crypto. Funny enough, these are also the people who at some point may have gotten hacked but still love crypto enough that they keep using it, right? That's probably where we want to do these really fun techniques. And then we want to do these really, really simple things for the normal users who aren't in crypto yet but who we all want to be in crypto, right? And instead of having one app that does both of these things, with some advanced option where people can do the crazy things, you have two different apps, so they can each take two very different strategies and experiment with things much more quickly. But yeah, those are my ruminations over the last few months on crypto wallets. So, curious from the Reddit perspective: most of the projects in the ecosystem don't really have a user base. 
You guys are fortunate enough to have millions of users that you're exposing to some of these technologies. What are some of the challenges or issues that you've identified? Do you even surface the fact that it is a crypto wallet, or do you abstract all of that away? What's the framing you're currently using? So I'm not going to speak on behalf of Reddit, but when I've noticed people use crypto wallets: number one, we're surrounded by these password systems every day, so they don't actually get what this recovery phrase thing is. It's also weird because you can't expect them to understand public-private key cryptography. But I think by far my biggest takeaway is that usually when we onboard people onto new technologies, we tell them the pros, how beautiful it is, how it's going to change your life. Whereas when we try to onboard people into crypto, we kind of scare them. We throw all this really scary jargon at them: cryptography, encryption, private key, don't give away your keys, you will get hacked, you will lose everything, right? We give them a one-hour essay on what you need to be careful about, and then we're like, so yeah, if you're still interested, come into this crazy world that we all know and love called crypto. So maybe make it a bit easier to roll in. Thank you, everyone. So I think this ties in quite perfectly with the next session, which we're going to change around a little bit. John will go up to talk about multi-chain tokens, send interactions, and the dapp UX challenges for wallets as far as multi-chain goes. So John, take it away. Thank you. Hello, cool. All right, before I start, could we have a quick show of hands of anybody here who's working on wallets or involved in wallets in any way? Wow, wonderful, awesome. Is there anyone here who's working on bridges? Wow, awesome, cool. All right, this talk is for you. 
So yeah, the multi-chain tokens and UX challenges we're facing today. I'm going to talk about the challenges a bit, and then hopefully we can have a discussion about a possible solution so we can ease this user pain. Cool. Before I dive in, let's think about what users really need, their kind of hierarchy of needs, when it comes to transferring money. The most important thing of any money transfer mechanism, not just crypto but traditional money transfer mechanisms, hawala, anything, is reliability. Reliability is important: you don't want your money to get lost. If we don't get reliability right, nothing else really matters, and that includes security. Once we've got reliability nailed, there are three more points, all more or less on the same level: cost, speed, and usability. Users can be happy to trade some of these against each other: pay a bit more for a transaction to go faster, pay a bit less and the transaction goes slower. But reliability is absolutely the number one important thing. Now let's talk about what users don't care about when making a transfer. Nobody cares about the detailed mechanics of how a transfer is made. This has never been a user need. People have not been saying, oh, I really want to understand how SWIFT works, what's my banking system doing under the hood, wow, crypto can really expose the inner workings of SWIFT transactions. I haven't heard that from anyone. And in today's multi-chain blockchain world, we require users to understand what L2 chains are. Now, I'm building Status, and I think about my mom. My mom's over 70 years old, she's a complete technophobe, and I want to make something that she can use and tell her friends about, and her friends can use. If I have to be explaining to her what an L2 chain is, I've probably failed at that point. You know, if I want to send money to someone on PayPal, I just use their email address. 
We don't want to require multiple information artifacts. And asking people to perform manual routing actions is absolutely crazy, but I'll dive into this. So, back in the good old days, when Ethereum was new and we'd all moved over from Bitcoin, and it was just Ethereum and single-chain token transfers, everything was easy. This was a token transfer: if Alice wanted to send a hundred DAI to Bob, Bob would send Alice his Ethereum address or ENS name. Alice would enter Bob's Ethereum address, select DAI, enter a hundred, sign the transaction, done. That's really easy. I know the UX was a little bit rough back then when Ethereum first came out, but fundamentally this is an easy process. This isn't really any harder than using PayPal. But there was a problem: it didn't scale. And if we want to bring crypto, especially for payments and token transfers, to the world, we need something that can do more than 15 transactions per second. That's been the last five years of work with L2s and, you know, now danksharding and all the exciting things that are happening. This gives us the scale, but unfortunately it's had a side effect: it's broken the token transfer UX, I think pretty comprehensively, unfortunately. So this is what a simple token transfer looks like in today's multi-chain world. Contrast it to that. It's horrible, but let's walk through it. Let's say Alice wants to send 100 DAI to Bob today at the conference, and Alice has 125 DAI in total. She's got 25 DAI on Ethereum Mainnet, 25 DAI on Optimism, 25 DAI on Arbitrum, 25 DAI on zkSync, and 25 DAI on Scroll, all on the same Ethereum address. However, Bob's wallet only supports Ethereum Mainnet and Optimism. So what do they do? Alice says to Bob, I would like to send you 100 DAI. And Bob says to Alice, here is my address, and I use Ethereum and Optimism. So this is our first fail: both Alice and Bob need to know what L2 chains are. 
As I said, I don't want to have to explain this to my mom. So then Alice saves Bob's address, and Alice also tries to remember that Bob told her he was only happy to receive funds on Mainnet and Optimism. If we think back to the hierarchy of user needs, reliability, we're starting to make something that's very unreliable from a user perspective. Alice needs to save multiple information artifacts in multiple places, and later Alice will need to retrieve multiple information artifacts from multiple places. Again, totally error-prone. Then Alice opens one of the many cross-chain bridge dApps and uses it to bridge 25 DAI from Arbitrum to Optimism. So now Alice not only needs to know what L2 chains are, she needs to know what cross-chain bridge dApps are. And there are like 10 or 15 of them today, and the prices are variable, changing all the time. Alice isn't going to check 15 different bridge dApps to find the best price, and if she does check them, by the time she's checked them all, the prices have changed. So anyway, she opens one and she bridges the 25 DAI from Arbitrum to Optimism. Unfortunately, that bridge dApp didn't support zkSync to Optimism, so now she needs to navigate to a second bridge dApp and bridge 25 DAI from zkSync to Optimism. But remember, Alice also had 25 DAI on Scroll. Now, Alice had no way of knowing this, but it would have been cheaper for her to actually use a different bridge dApp and do the Scroll to Optimism transfer. But the complexity here has just exploded, so Alice has no way of finding the cheapest price, and she just picks one and goes with it, because she needs to get it done. Yay, Alice now has 25 DAI on Ethereum Mainnet and 75 DAI on Optimism. And so Alice can go back to what she originally wanted to do, which was send 100 DAI to Bob. 
So she does one transaction on Ethereum Mainnet to Bob's address for 25 DAI, and another transaction on Optimism to Bob's address for 75 DAI. And finally, after about 10 minutes of work, Alice has managed to send 100 DAI to Bob. Now, I'm a blockchain geek, I love this stuff, but even me, and I understand how this works, I don't want to spend 10 or 15 minutes making a single transfer to someone. We're so far beyond where we need to be for mainstream adoption of multi-chain here. We've got something we need to fix. So let's reflect on it. All Alice wanted to do was send 100 DAI. She had to navigate to and interact with two different bridge dApps. She had to perform two different token send actions. All while not forgetting which chains Bob was happy to receive funds on. And it's almost certain that Alice overpaid for this; in fact, she definitely overpaid for it. It's error-prone: you've got these different information artifacts, you're navigating to these different places, we're making it hard for the user and giving them a lot of scope to make errors. And now, how do I sell this to someone? I've been telling them all the wonderful things about crypto, and then they go to make a transfer, they run into this, and they're like, oh, I'm never doing that again, I'm going back to PayPal. This will turn people off. And it's needlessly costly, and it requires far too high a level of blockchain knowledge, really. But we can fix it. That's the problem, and we need to work together to fix it. So I'll jump to the conclusion and then I'll talk about why this is the answer. To unlock fixing all of the problems I've talked about, we wallets need to agree on standards so that when one of our users gives another of our users their address, that address also signals which chains the other user is happy to receive funds on. 
Luckily, there's already a standard called EIP-3770, which prepends a chain short name to the beginning of an address, based on ligi's database of chain short names. What we're doing at Status is extending that so you can prepend multiple chain short names. And, to be honest, we're happy to do it in whatever way the consensus of wallet builders comes up with. But I do think the most important thing is that we need to agree on a way for users to signal which chains they're happy to receive funds on for a given address. Now, there are other ways to do this. ENS, in fact, already basically supports doing this, but not all users have ENS names: it costs money to create one, and there are privacy implications. So this is a fallback for people who don't have an ENS name. But let's talk through how, with just this one simple thing, we can fix everything I just talked about. We can drop the token send UX complexity from this back down to this. With only the addition of an address format that signals which chains a user is happy to receive funds on, Alice can say to Bob, I'd like to send you 100 DAI. Bob sends his address to Alice. Alice enters Bob's address into her wallet and selects the type of token and the number of tokens to send. And Alice performs a single action to authenticate a bundle of transactions, and it's done. In fact, neither Alice nor Bob even needs to know what L2 chains are for this to work. So let's unpack how this actually works under the covers. Alice says to Bob, I'd like to send you 100 DAI, and Bob sends his address to Alice. Now, with this address format that adds the chains the owner of the address is happy to receive funds on, Bob's wallet automatically encodes this into Bob's address. Bob doesn't even need to know that his wallet is doing this, potentially. 
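A minimal sketch of what parsing such an address might look like, in Python. EIP-3770 itself allows exactly one short-name prefix (e.g. `eth:0x...`); the multiple-prefix form shown here (e.g. `eth:oeth:0x...`) is the extension described in the talk, and its exact syntax is an assumption of mine, since the real format is whatever wallet builders agree on.

```python
# Sketch of parsing an EIP-3770-style address with the hypothetical
# multi-chain extension: zero or more chain short names, colon-separated,
# prepended to a normal 20-byte hex address.

def parse_multichain_address(s: str):
    *short_names, address = s.split(":")
    if not (address.startswith("0x") and len(address) == 42):
        raise ValueError("not a 20-byte hex address")
    # an empty prefix list means no chain preference was signalled
    return short_names, address
```

For example, `"eth:oeth:0xAb58..."` would signal that the owner accepts funds on Mainnet (`eth`) and Optimism (`oeth`), using the short names from the chain registry.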
Then when Alice enters Bob's address into her wallet, Alice's wallet reads which chains Bob's wallet can receive funds on from the short names that are prepended to Bob's Ethereum address. And now Alice's wallet can do all the clever stuff. Alice's wallet will look at the type of token Alice wants to send, the number of tokens Alice wants to send, and what the balance of that token is across all the chains the wallet supports for that address. It will look at the chains Bob is happy to receive funds on at that address. It will look at 15 cross-chain bridges in real time and work out exactly which one is cheapest. It will look at gas prices across all the various chains. Basically, it can entirely automate the routing. This is actually an impossible problem for a human to solve. This is the type of thing we invented computers for, so let's use computers to do it. Just a quick note: we're going to be short on time, but I think some of these discussions are quite important, so maybe we schedule a follow-up conversation and a working group and continue from there. Yeah, I was hoping we could have a discussion with the wallet folks about this. Okay. And then Alice performs a single action to authenticate what is actually a bundle of transactions. So with one action in the UI, a whole bunch of transactions are signed that perform all the bridging and all the send actions. And then Bob's wallet receives the tokens. If Bob's wallet has decided to display an aggregate view of balances across chains, Bob just sees he's received 100 DAI. Now, this is just optionally abstracting everything away from the user; obviously a user can peel back the onion, dig in, and manually tweak parameters. When we've done some of our user testing of the designs we have for Status that basically implement all of this, it's quite interesting: existing blockchain users kind of go, oh, this is scary, what are you doing? 
But people who've never used blockchain before seem to be completely comfortable with it. So I think once we release our product that does all of these things, yeah, existing blockchain users will take a while to get comfortable with this level of automation, but I think for new user adoption it'll be great. Yeah, and now we have a token transfer experience that delivers usability. We've got back that, you know, it's simple to use. We have delivered to users the cost benefits of L2 chains without all the complexity of the current multi-chain token send. And yeah, we've eliminated the two separate information artifacts that users needed to hold. We've made it possible for users to send transactions in the cheapest way that's available at that point in time, et cetera. So yeah, these usability and cost benefits are all needed, yeah, if crypto token transfers are gonna have a chance of breaking into the mainstream. Yeah, from earlier, I, sorry. Are you thinking of extending this? Like, not only would you share the chains, but also the currency, because you might not have DAI, you might not have that currency. Is there a way to expand this? I don't think there's a way yet. Potentially; I've tried to keep this proposal as simple as possible, just because this is something we need to come to consensus on as wallet builders, and the simpler it is, the better chance of it happening. But yeah, there are many ideas of ways it could be extended in the future, and also with different address formats and things. Resolver services are a really good way to go, but I think we need a base layer address standard as well. Yeah, I think this topic requires a working group for sure, so I'd want to be respectful of all the other speakers' time.
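The automated routing the talk describes (balances per chain, the chains Bob accepts, live bridge quotes, gas prices) reduces to a small optimization problem. A minimal sketch, where the data shapes, chain names, and fee numbers are all invented for illustration; a real wallet would query bridges and gas markets live:

```python
# Hypothetical sketch of the routing step a wallet could run. Fees fold
# bridge cost and gas into one number for simplicity; splitting a send
# across several source chains is deliberately left out.

def cheapest_route(balances, accepted_chains, bridge_fees, amount):
    """Return (source_chain, dest_chain, fee) minimizing total fee,
    or None if no chain holds enough balance to cover the send."""
    best = None
    for src, bal in balances.items():
        if bal < amount:
            continue  # this chain can't fund the whole transfer
        for dst in accepted_chains:
            fee = 0.0 if src == dst else bridge_fees.get((src, dst))
            if fee is None:
                continue  # no bridge quote for this pair
            if best is None or fee < best[2]:
                best = (src, dst, fee)
    return best

route = cheapest_route(
    balances={"eth": 250.0, "oeth": 120.0},
    accepted_chains=["oeth", "arb1"],
    bridge_fees={("eth", "oeth"): 4.0, ("eth", "arb1"): 6.0, ("oeth", "arb1"): 1.5},
    amount=100.0,
)
# route == ("oeth", "oeth", 0.0): a chain both sides share needs no bridge
```

The point the speaker makes holds up in the sketch: the search space (chains × chains × bridges × gas prices) is exactly the kind of enumeration a computer does trivially and a human cannot.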
So if you guys do have more questions and you wanna kind of coalesce in the corner of this room, we have the room for the whole day, so you're more than welcome to kind of like brainstorm and work on this stuff. And I think, thank you, John, for really like picking up the baton for this. It's an area that no one has really done any work in in any organized manner, so thank you for putting this together. And I really hope that people from the wallets that are here together take this opportunity to really collaborate and move some of this stuff forward. Thank you very much. Cool. So next up we have Rob from the Remix team, and he's gonna talk a little bit about developer infrastructure and, sorry, go ahead. Should we have a specific time for wallets to get together to have this discussion? You guys are more than welcome to even have some like quiet conversations right now, but I think after lunch, around 12:30, you guys could probably get started. And then the last conversation we're probably gonna have is around 1:10, which is some of the client teams. Okay, so let's discuss at 12:30, wallets. All right, thank you very much. Rob, you're up next. Yeah, is ease of use an attack vector? Is Remix too popular among beginners? Because beginners, this is a talk about beginners probably, or it's a talk about noob hubs, because Remix kind of is a hub for noobs, and noobs are a good place to steal from. And, but this could be Remix, it could be, you know, MS-DOS or whatever. And we recently got this email telling us that we gotta shut down Remix because it's too easy to scam people there. And we often get emails from people saying, I lost my money from this, I copied some code from a YouTube video and lost my money. Can you help me get it back? So then it's a question of like, are we responsible for that? Did we enable these scammers? What else does he say that's funny? Well, it's not really funny, but: I beg you to close your site.
So who's responsible? And the choices are users, because they're the idiots that got scammed (well, that's not really a great thing to say), Remix (I don't really wanna be responsible), or the scammers (well, they did it). But anyway, it's a discussion that we need to kind of have. And in trying to slow down the process, do we need to add some friction? Now friction can come in a couple different forms. It could be warnings. We've added some warnings in Remix, and as the videos came out telling people to use this Flashbots stuff in the scam videos, we added some warnings. On the homepage, we added some warnings there. People are still getting scammed, so maybe they're not reading the warnings. And also when you deploy to Mainnet, you see this modal. It's not about saying, don't do this if you don't know the Solidity code that you're deploying, it's actually more about gas fees. But anyway, it slows people down. So it's a little bit harder to deploy to Mainnet than to any testnet. And from the Remix docs, it's like warning people not to deploy things they don't know. I mean, they have to dig through the docs to get this, so maybe that's not such a useful warning. But also, it's a warning with a little bit of humor: don't use a get-rich-quick scheme if you really, really wanna get rich. Maybe humor is a way of getting the warnings to work a little bit. And how many warnings are too many, and when does it get depressing? Like this, that's a warning to depress people. And if you come into Remix every day and you see like scam stuff, it's kind of disgusting. Or there's other kinds of warnings, like: hearken stranger, shun the danger; if you plan to stay the same, best come back from whence you came. It's from one of my favorite books, Shrek. Or we could try to use other kinds of conceptual models for warnings, like choking-hazard warnings for kids under three, or: not appropriate for gullible users.
But who defines what a gullible user is, especially if you're trying to make your tool accessible to everybody? And look, here's another sort of example: a table saw is kind of a beautiful object, but it's got some ugly parts, like the finger guard; also the tube that takes out the sawdust is kind of ugly, and the stop button is ugly. But the rest is a very nice tool. But you add some of these protection features and it makes the tool a little more cumbersome, maybe more useful, maybe you feel safer using it, but it has some problems. Like if you extend this to a shop class in high school, the teacher doesn't want to take responsibility for anyone getting hurt, so they make it impossible for any of the students to use the tool, because you have to go through all these steps to use the tool. But that's too much friction, and we want people to adopt the tool and to use it and to play with it, but there are inherent risks. So that was the idea of obnoxious but funny warnings: if that was on the homepage of Remix, maybe that would temper people, but maybe it would be obnoxious itself. And how do we make it so it's not miserable? Or it could be more like just news. Like, have you seen the newsletter Rekt? I really liked reading it. I also liked reading the crime blotter in the local newspaper, like what the local crimes are around me. It could be a scroll, like a feed of rekt in Remix. That would be kind of informative and just showing some of the inherent dangers. And what's another kind of friction that's not a warning? I guess that's a kind of delay, which is a little bit like that modal before we deploy to Mainnet.
So I don't really want to use too much friction, because the whole idea is that you don't... the earlier point was that in other tools like Hardhat or Truffle or anything, there's a kind of setup, and that bar stops a lot of people, or fresh beginners, from using the tools. So we have a tool that we want everyone to use, but it inherently has some qualities that could be dangerous; people can get scammed there. Anyway, and is it our responsibility to teach all the users some fiscal responsibility or literacy? Like in the US, you have to be over 18 to sign up for a credit card. I don't know how you put that into Remix, but still, there are certain kinds of thresholds that are used in other financial systems. But then the credit card companies are sending their credit cards to everybody, and people are going into debt as a result who don't understand how to use them. And even those who do understand how to use them go into debt too, because I want to buy that thing. And why are people more gullible here than they are in the rest of their life? That question is another way to ask how we might get better warnings or better ways of understanding what's going on. So maybe they're more gullible because they're afraid of missing out. Or maybe they're more gullible just because it's a get-rich-quick scheme and they're trying to enter the space, and someone shows them how to do it. Sort of. But I mean, I'm always afraid to deploy things to Mainnet that I don't understand, and even things I do understand. So I'm surprised that people have the courage to follow these schemes, but they do. And this was just a thought that came up after the last talk: maybe users see everything as just so complicated. They want to understand it, they want to play with it. Here's a little bit of code. Okay, I'll try. But they don't understand the complexities of the whole system. And so they're just trying stuff out and they get screwed.
So the warning here is somehow to temper their play. And then the other idea is maybe the scammers are just very charismatic and seductive. And so, I don't know, can we make warnings that out-seduce the seducers? The problem is, it's hard to do. So yeah, I'm not sure how you out-seduce the seducers here. So I think we're back to trying to get some Web3 literacy for users. And anyway, oh, I didn't get that. That's it. Thank you. Just had a quick question. Do you think, as Remix is an educational tool, it is your responsibility to worry about some of this stuff, or are you already doing the service that you are providing, which is to provide this playground for individuals to learn and so forth? Do you think that's enough to take that burden on yourself at this point? Well, we put warnings on the homepage. That was kind of new. And, yeah, more than that, I don't think so. Because it's a tool and... All right, thank you very much. Next up. Good. Good. Thank you. Next up, we have Sunil from Smart Token Labs. Do you guys have any slides or are you gonna end with? No. All right, perfect. So just for context, Smart Token Labs did a great job of building the ticket attestations for Devcon and also the NFTs. So if you guys haven't claimed on Optimism, Arbitrum or Polygon, you can do that right now. And then Mainnet will hopefully be ready tomorrow. Yeah, so check out the application and then get yourself some Devcon NFTs. Yes, thank you, Akhil. So hey guys, good day, happy to be here. And today I'm gonna talk about attestations and the UX challenges that we faced with attestations. And really, this is an appeal to all of you guys here, the UX designers, right? To really bring this concept live into Web3 and also help us with the adoption, because we are doing it as a public good for everyone. So before beginning, jumping into it, right? Jumping into the UX issues.
Let me give you a brief on what attestations are. So at the heart of it, an attestation is a cryptographic proof that is issued by Devcon saying that the ticket is assigned against your email. So go to your inbox, check out the third email from Deva the Unicorn. You have your attestation email there. So once you click that, you go to this page that decodes this attestation. So the attestation, even though it's a cryptographic proof, is encoded as a URL, and this is decoded and stored in your browser's local storage. Now, local storage is considered a secure enclave where the cryptographic proof resides in a relatively safe way. Other browsers or other websites cannot access this in a direct fashion. Now, once you have this proof, you can do several interesting things. Now, the first thing is that this is readily usable by smart contracts. So you can initiate transactions on chain. You can do things on chain. So minting NFTs, right? That is something that you can do on chain, because you have this cryptographic proof. Now, the beauty of an attestation is that it does not have to be limited to on-chain transactions. You can use it off chain, and also you can use it for in-person, in-real-life examples, because you can also expose the signed ticket as a QR code. Now, this is using standard cryptographic methods, and as long as you know Devcon's public key, anyone can literally decode it and verify this is signed by Devcon and it is a ticket attestation.
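The flow just described (attestation travels as an encoded URL, gets decoded, verified against the issuer's key, and cached client-side) can be sketched roughly as follows. To be clear about assumptions: the real ticket attestations use asymmetric signatures, which is what lets anyone holding only Devcon's public key verify them; this standard-library-only sketch substitutes an HMAC as a stand-in, and all the field names and the URL shape are invented.

```python
# Illustrative sketch only: an HMAC stands in for the real asymmetric
# signature, and the payload fields are assumptions, not the actual format.
import base64
import hashlib
import hmac
import json

ISSUER_KEY = b"devcon-demo-key"  # stand-in for the issuer's signing key

def issue_attestation(email: str, ticket_id: str) -> str:
    """Encode a signed (email, ticket) binding into a magic-link URL."""
    payload = json.dumps({"email": email, "ticket": ticket_id}).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    blob = base64.urlsafe_b64encode(payload).decode()
    return f"https://example.org/claim#{blob}.{sig}"

def decode_and_verify(url: str) -> dict:
    """Decode the URL fragment and check the signature before trusting it."""
    blob, sig = url.split("#", 1)[1].rsplit(".", 1)
    payload = base64.urlsafe_b64decode(blob)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("attestation signature does not verify")
    return json.loads(payload)  # in a browser, this is what lands in localStorage

link = issue_attestation("alice@example.org", "T-0042")
assert decode_and_verify(link)["ticket"] == "T-0042"
```

The same decoded-and-verified payload is what can then be re-exposed as a QR code for in-person checks.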
Now, this is the concept, and we are doing ticket attestations because we believe that before we go into ticketing and before we jump into NFTs as tickets, you need a step before that where it is a ubiquitous technology, because Web3 is not just limited to on-chain transactions, and if you are using something like tickets for events, you should not restrict it to people who only have a wallet. Or if you don't have a wallet, the other solution is to have a custodial wallet, but that sort of beats the purpose, right? So we are building this piece of technology by anchoring it on one of the most ubiquitous things that's there on the internet, which is an email address. That's what we are anchoring this proof against, and from there, it is up to you. It is up to everyone to use this attestation in a permissionless fashion to do whatever you want. And that's the vision, that's the idea, that's what we are trying to do. Now, the UX challenges. The first one starts with the naming itself: attestations. Before I explained attestations to you, what did the word mean to you? Hey, you have a ticket attestation, what does it mean to you? So we had been having these conversations. Luckily for us, maybe not luckily for everyone, we had like three years to plan for this because of COVID, and we had been bouncing these ideas off people. So the first feedback that I would always get is that, now, attestation, it doesn't mean anything, you should call it something else. Now, what is that something else? Certificates, proofs? And what happened over these three years? NFTs happened. Until that point in time, nobody knew what NFTs were, and now everybody, the kid next door, knows what an NFT is.
So we thought that, okay, maybe we'll stick with attestation, because there is nothing really that captures that idea around having a cryptographic proof that can do all these things, that can be a core component of how Web3 really connects to the real world. So we went with that, but still, it's not understood just yet. So the first challenge is naming it, and how do we design it? Now, the second one, which again I would say is part of this first challenge, is how we visualize it, because NFTs have a picture associated with them, they have standard metadata, but an attestation does not have a visualization to it. Now, we went around this challenge and we visualized it as a ticket in the webpage, and now there are two different states that you can have. You go to this webpage using a magic link and you would have a ticket there, and if not, you would have a blank space saying that you don't have a ticket attestation. Now, even testing it with users at Devcon, if you click your magic link, you can see that the visualization is the first thing that comes up. However, it is not intuitive to say that this is the ticket and that is the ticket attestation that you have, so we are still working on it. So I think that there's a long way to go in order to identify how we can visualize this thing. And the third thing is that even for attestations, even when you have one loaded into your local storage, it's not complete yet, because the proof essentially says that Devcon has issued this ticket against your email, and that's in your local storage. Now, in order to do any sort of on-chain interactions, you have to have a second piece there, which is what we call an email attestation. Now, the email attestation essentially says that your email is now tied to this wallet, and there is a trust anchor that signs it off. So once you put both of these things together, that is when you can really initiate blockchain transactions.
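The two-proof structure just described (ticket attestation binds a ticket to an email; email attestation binds that email to a wallet) amounts to a simple chain check. A hypothetical sketch with assumed field names, with the signature verification on each proof omitted:

```python
# Sketch of chaining the two proofs described above. In the real system both
# records are signed attestations; here they are plain dicts with invented
# field names, and each proof's own signature check is assumed done already.

def can_transact(ticket_att: dict, email_att: dict, connected_wallet: str) -> bool:
    """The chain holds only if both proofs agree on the email and the
    email attestation binds that email to the connected wallet."""
    return (ticket_att["email"] == email_att["email"]
            and email_att["wallet"] == connected_wallet)

ticket = {"email": "alice@example.org", "ticket": "T-0042"}
binding = {"email": "alice@example.org", "wallet": "0xA11ce"}
assert can_transact(ticket, binding, "0xA11ce")       # unbroken chain
assert not can_transact(ticket, binding, "0xB0b")     # wrong wallet connected
```

Only when both links hold does the wallet have an unbroken proof from the issuer down to the key that signs the on-chain transaction.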
Now, the state of how the ticket is verified and where you can use it, that is still something that we are trying to figure out. What is the best way to communicate it to a user, especially a user who is completely new to crypto and completely new to Web3? So that's the first challenge that we are working around, and that's around naming, definition and visualization. The second UX challenge is browser support. Okay, now people use all different kinds of browsers. But looking at the devices of the group that is using attestations today, it seems that Chrome and Safari cover around 85 percent of the users. However, they are using them on mobile, they're using them on desktop. And in order to get some of the complex logic working around attestations, it is best used on a desktop browser. However, when it comes to mobile browsers, there are so many different hoops that the users have to go through. In some browsers, they would have to give permissions. In some browsers, they would have to open up a new tab and come back. So it's really, really hard. And especially when you take it to the wallet browsers, the dapp browsers, it doesn't work there at all. So, you know, we have so many people coming in saying that they're using a smart contract wallet, a wallet browser, and it doesn't work, because the modern browsers have the capacity to process and use attestations, but most of these wallet browsers do not. So how a user is navigated to the right browser is also a challenge. Now, the way we go around it is that we identify the device and we show a message that this is not a supported browser, use it in another browser, but invariably, people would still try to use it on a dapp browser and would get a bad result. So that's the second challenge.
And the last one is that in order to procure an email attestation, which, as I said, is needed to prove the complete unbroken chain from Devcon issuing the ticket to you, to you having this wallet, there is an OTP flow that you need to go through. And we did that for Devconnect. Devconnect was the first instance where we shared this ticket attestation with everyone and asked them to mint tickets. And for this user experience, people had to click through, go to the page, and then procure this email attestation, and that's where the biggest friction point happened. Because of the restrictions around iframes, because of the restrictions around cookies and all the cross-origin issues, you had to open up a new tab, then go to your email, get the OTP, put that into the tab that was opened up, close it and then come back, and then you would have a full attestation that's ready to be used for minting. It was really, really bad, and getting all the redirect flows implemented was also hard. We are still working on it, but hey, for Devcon we thought that this is a new concept, let's just take it away. Let's just make it so simple that people can still use it. So in the current UX that you see right now, you don't see this second portion of procuring email attestations, because we wanted to make it as easy as possible for anyone to adopt it. Now, even then, what we are seeing is that people are getting really confused, and especially for the NFT minting process, we have made it so simple that all you have to do is to go to the ticket minting page, connect your wallet and click mint, and after a while the NFT would come through. Now, we leaned in a little on really good videos and graphics around how to visualize it. However, on iPhones it doesn't work, no surprises, and people are confused around how this minting process is happening, and the interesting question is: have we made it too simple?
You don't have to sign a transaction in order to claim your NFT, and I'm seeing that people are getting confused. Okay, I did not sign a transaction, I connected my wallet, how is the NFT getting minted for me? So then we are thinking, have we made it too simple? Is there any other feedback that we should give to the user to make it a bit more clear that this is how it works? So anyway, this is where we are today, really looking at getting the minting processes out and getting feedback from the community. And as I said earlier, this is an open appeal to all the designers over here: think about how we could make this open community, sorry, open source technology better, how we can use it to connect to the real world. So that's what we are really looking for, and I ask you to go check out your ticket attestation webpage, try to use it to claim some of the permissionless perks that people are providing, try to mint an NFT, and let us know if you have any feedback. Right at the bottom of the page, we have our Telegram group, so pop in, grab me for a discussion, always love that. So with that appeal, I wrap up my speech, and thank you so much for listening. Thank you guys. Thank you very much. It kind of sounds like you're damned if you do and damned if you don't. So it's interesting to see that even making it easier confuses users at the end of the day. But I'm really excited to have Kelvin come up and talk about the risk factors around L2 bridges and what kind of systems we can create around that. So thank you very much. Any slides or anything? No, no slides. Where's the mic? Okay. We should chat later, because Optimism is building an attestation system, like a base layer attestation system. Smart Token Labs, yeah, yeah. Highly interested in a base layer attestation thing. Okay, so I'm gonna need some basic audience participation to prove a point.
So, you know, raise your hand if you know what rollups are, generally, you know what layer twos are. Okay, kind of, most people. All right, optimistic rollups, right? Raise your hand if you know what an optimistic rollup is. Do you know what a fraud proof is? Okay. And then raise your hand if you think that the definition of a working fraud proof is that it's a system that significantly improves the security model of an optimistic rollup over not having a fraud proof at all. If this is what you believe a fraud proof is. Okay, and then, you know, if I ask people which rollups today have working fraud proofs? Can someone, can I point at people? Which, yeah, tell me, which optimistic rollups have working fraud proofs? Which rollups have working fraud proofs? Okay, interesting. So this is actually more correct than I think the average person would get to. The answer is basically none. The answer is essentially none. This is a very confusing thing that has been going on in the sort of layer two security ecosystem, which is that we have no standardized language around how to communicate the security of our systems. I said bridges in the title, but really I mean L2s in general. There's no language that we can use to say what are the real security properties of these different systems, right? And so people say, oh, we have... maybe the only rollup that has working fraud proofs is Fuel V1, and no one uses it, and it only really has half-working fraud proofs, because it has one implementation, and if that implementation has bugs, who cares, right? So we're faced right now with two very, very important problems in the rollup space. The first one is that we need to keep rollups, we need to keep layer twos, accountable, right?
If we don't keep layer twos accountable, if we don't have a standardized set of things that layer twos really need to prove to the world that they do, then security is gonna come down to marketing, right? You can just convince users that you're more secure by saying these vague things that are not technically incorrect, but no one can really prove you otherwise, and if you try to argue about it, you're gonna start a big fight on Twitter like I do every other day, and then this is what happens, right? Security becomes about marketing instead of real security for users. So this is problem number one, right? How do we come up with a framework, and this is a more technical problem, of specific features that a rollup or a layer two system needs to have to be considered secure? And then the other side of this coin is that that system of keeping things accountable doesn't matter unless we can communicate it to users in an effective way. If you go to, you know, I love L2BEAT, but if you go to L2BEAT right now and you look at the risks, it's illegible. Doesn't make any sense. This can do this, that can do that. It's impossible for the average user to actually look at that information and get anything meaningful out of it. And I think the average user really needs something much simpler, right? Maybe that looks like a score, like one of those JavaScript performance scores. Maybe it literally just comes down to a single number, or, you know, a rating out of a hundred, whatever it is, right? Something that in the simplest possible terms communicates to a user: what are the actual security properties of the thing you're about to interact with? Because if we don't, people are gonna get burnt, and then they're gonna complain about layer twos, and they're gonna never use another layer two again, because they thought it was secure, because the marketing said it was secure.
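The "single number out of a hundred" idea floated above could be sketched like this. To be explicit: the factors, weights, and scoring below are entirely invented for illustration, not any actual risk framework from Optimism, L2BEAT, or anyone else; the real work the talk calls for is agreeing on what the factors should be.

```python
# Hypothetical sketch of boiling several L2 risk factors down to one score.
# Every factor and weight here is an assumption made purely to illustrate
# the shape of the idea.
RISK_FACTORS = {                              # weight per factor (sums to 1.0)
    "working_fraud_or_validity_proof": 0.40,
    "permissionless_exit": 0.25,
    "no_instant_upgrade_backdoor": 0.20,
    "multiple_proof_implementations": 0.15,
}

def security_score(rollup: dict) -> int:
    """Rate a rollup 0-100 from boolean answers to each factor."""
    total = sum(weight for factor, weight in RISK_FACTORS.items()
                if rollup.get(factor))
    return round(total * 100)

example = {"permissionless_exit": True, "no_instant_upgrade_backdoor": True}
# security_score(example) == 45
```

Even this toy version surfaces the hard part the speaker raises later: a weighted checklist cannot capture things like untested code being riskier than tested code, so the consortium would have to argue over the factors, not just the arithmetic.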
And then the people who were marketing it were saying, well, you know, we never actually said it was secure. We actually said it had a fraud proof, but then the multisig is what got hacked, and, you know, it's just all really confusing. It doesn't make any sense. If we don't have a language to communicate this to users, they're gonna have no clue what the systems that they're using are actually capable of. They're gonna have no clue how safe their money actually is. And if we don't fix this problem, people are going to get hurt, and then they're not gonna wanna use Ethereum, and it's gonna hold back this rollup-centric future for a long time. So I guess later on when we start doing workshop stuff, the goal is basically sort of half and half, right? It's trying to come up with: what are the key features of these L2 systems on which we want to hold L2s accountable? Because we wanna make the incentive to have real security, not pretend security, not just marketing. We want concrete things, where if you're not doing these things, the reality is you're not secure, no matter how much you say you're secure. So we want a list of concrete things. And then on the other side, we want a way to easily communicate that to users. I don't know what that looks like. I don't know where the right place is to put that information. I think when we talk about this, a big concern is: if we put this on the applications, by the time users are on the applications, they've already taken that risk. If we put it on the bridge, by the time they're on the bridge, maybe they're thinking about taking the risk, but a lot of the people who come to our bridge, from the user research that we've done, by the time they're on the bridge dapp, they've already basically committed to bridging. So it's unclear where to put this information so that users know ahead of time, before they even get into the ecosystem.
Because once they're in, oh, now I found out that it's not as secure as I want, but now I have to withdraw and spend more money to get out of the system, and it's just a headache. So the goal is: how do we make sure that users really know the security properties of the systems that they're using? And where should that information live, and what should that information look like? So that's what we're interested in at Optimism, basically. We just want to make sure that our users know what they're getting into before they're getting into it. So I had a few questions. One was, like, one of the issues, similar to when you have staking providers, is who does that ranking? And is that ranking then also somewhat compromised by the organization that's doing it, or who's financing them, et cetera? Like, how do you create some sort of mechanism around what is secure and what is insecure? Right, so the back end, I guess, of this, sorry, attestations are on my mind, is the risk framework, right? So it's the low level things. And I think fundamentally, this is just gonna have to be an open thing. I mean, Optimism is very interested in this for ourselves. We want to keep ourselves accountable. And so we've been working on this internally, just as like a checklist of things that we need to do to get to real security. And so I think an easy answer is we just put it out there, start some sort of organization, like some sort of collective group of the bigger L2s, and say, these are the security properties that we're pitching. If you have any ideas about security properties that we're missing, or if there are things that you think don't matter, then explain to us why. Why does this matter? Why does this not matter? And I think at the end of the day, it just has to be a consortium of people coming up with a standard that we can all measure ourselves against.
Also, another question came in from Graeme from Argent. He wasn't able to attend, but he was wondering how he could understand the trust differences between the rollup flavors. So his understanding was that ZK rollups can trust each other, but optimistic rollups can't trust them. Is that true, or is this understanding flawed? I don't know what the word trust means in this case. Okay. Well, okay. You know, here's another thing, right? You have so many different axes along which you want to measure these things. And a critical one is bug risk, right? Like, you can have a ZK rollup; the whole point of the proof system is to prove the correct execution of the client. If the client has a bug in it, the proof system might be perfect, but all of a sudden the proof just proves that the bug ran correctly, and now you can lose all your money. And so, I think there are so many different angles, and it's so easy to get confused about: okay, is a ZK rollup more secure than an optimistic rollup? Does it matter if I only have one client or if I only have one proof? Do I need three different proofs? But the users don't care about any of this, right? How do you communicate that to users, saying, you know, a system with one proof is less secure than a system with two proofs? But if the system with two proofs has never been tested on chain before, maybe it's less secure. And so, you know, at the end of the day, there's too much information to communicate to users. How do you boil that down into: this chain is, like, okay secure, this chain is more secure than this other chain, right? Makes sense. And then beyond that, I think what John was talking about, the user experience of bouncing between and utilizing these chains in a seamless manner, is a big challenge too. So I think maybe that group could get together and discuss some of these things together in some capacity.
And I think the Smart Token Labs folks have quite a bit of experience from AlphaWallet, et cetera, as well. So that could be a good conglomerate of teams to get together. So thank you very much, and looking forward to those conversations later on in the evening. So the next topic we're jumping to: I invited most of the client teams to come participate. I think that's one of the biggest issues in the ecosystem right now, how do we get more people participating in the network itself, and what are the UX issues and areas that we need to focus on? So we'll skip over the attendee fishbowl and go straight to Cindy, who's from ChainSafe and working on Lodestar and other projects there. So thank you very much, Cindy. Good. You just got to turn it on, but you can just hit this one. Hey guys, this talk isn't going to be about Lodestar. It's not going to be about client adoption. It's actually an end-user talk. So sorry for the disappointment, but I think it's refreshing. Cool, so my name is Cindy. I work at ChainSafe Systems. We are a blockchain research and development firm. We also make products that help end users out using web3 in their everyday lives. Cool, so in 2020 and 2021, I worked on a project called ChainSafe Files. It's an encrypted file management platform, kind of like CryptPad or Skiff. The only difference is that we put the content directly on the IPFS and Filecoin networks. So I want to share some stories about what it was like to build for privacy for end users. There are three challenges, I mean, three challenges, three trade-offs, and some solutions and tips that I can give you guys. The first one: designing for different appetites for privacy in a single app is really hard. We came up with features like exporting all your data off via CLI, pinning on IPFS via CLI, and also a social recovery flow using Web3Auth. What seemed like a win for some of our users actually ended up creating a lot of friction for our more novice users.
And the trade-off here was figuring out the information hierarchy, like what do you show first, what should you hide, what should you abstract away? So we used a framework of asking ourselves questions about transparency to deal with this. Like, if we add this feature, would it diminish our users' trust? If we didn't show this modal, or if we didn't show this hash, would it increase users' trust? And so we argued endlessly about where to put the CID for the files that are stored on the Filecoin and IPFS networks. Do we show that right in the file browser, or do we abstract it away? We decided to show it right then and there. It looked pretty ugly, but it was a way to differentiate our product early on, that we weren't trying to recreate an off-brand Google Drive. So those are some trade-offs that we faced when we were designing for different appetites for privacy. Some recommendations and tips I would give you guys if you're dealing with a similar issue: craft forgiving user flows, meaning that you should expect errors. That's a clear choice. You should also err, in my opinion, on the side of showing too much information, but show skip buttons and make them dismissible for the folks that are used to it. After a while of testing information architecture, you're going to see what type of privacy, or what type of user, is going to emerge out of your app and is best for your specific use case. A second challenge that we faced when we were designing for privacy, convenience, and transparency is that profiling users and usage is challenging when data collection is limited and at times intentionally vague. So in Files, you could log in with Google, GitHub, or an Ethereum wallet. But in our user surveys, we would only make it mandatory to ask for one of those. So we weren't sure how many duplicate responses we were getting in the user feedback.
And on top of that, it was sort of difficult to distinguish returning users from new users. Another project where this challenge arose was when we were building a dashboard to compare different Ethereum consensus clients. We wanted to inform the community, but at the same time, we wanted to protect the privacy of node operators. We ended up limiting geographic information, so you couldn't zoom in on the heat map all the way, stuff like that, and then emphasizing client stats and upgrade information. That was a little easier than dealing with end users, because that community is a lot more responsive and easy to reach. To summarize this point: sometimes you do things that are UX sacrilege. You're working off of a huge assumption, you're filling in a lot of the blanks yourself, or your sample sizes are just really small. So the way that we tried to deal with this, when our sample size was very small, was to try to stay credible by keeping it one-on-one. We would talk to anons on Discord and Telegram, and not try too hard to, you know, get people into a Calendly meeting. We didn't push for that as much, and that created great results for us. All right, so the last challenge that we faced when building along this privacy and convenience axis is a bit more high level, although I think it's just as important. You know, raise your hand if you've heard that famous quip where web one is read-only, web two is read-write, and web three is read-write-own. Yeah, like most of you, right? So if you're building for the ownership part of read-write-own, I've realized that there's this proliferation of choices and affordances that comes when you build end-user apps like that. In other words, having pseudonymous rights, ownership, and data sovereignty augments the complexity of the interactions within the interfaces between the app and the users. For example, compare Twitter and an NFT marketplace.
When I go on Twitter, I upload my content and then I wait for the dopamine. But on an NFT marketplace, I do that, but I also list tokens, I trade, I de-list and migrate, ideally, or get royalties. So that's just an example of how web two and web three user patterns are starkly different, and we probably shouldn't be trying to force them to be the same. My recommendation for working on challenges like this, where there's greater complexity created by this ownership model, is a little counter-intuitive. As a designer and a front-end dev, when you're faced with a lot of information, you want to clean it up and hide it away somewhere, but I'd actually advise resisting the oversimplification of the UI. That impulse comes from the assumption that the best user experience involves as little awareness of the software as possible. And this is a paradigm that I think we need to leave behind, at least a little bit, because the simplest solutions aren't necessarily the most effective or the most efficient when ownership and sovereignty are highly valued by these end users. If users own content, funds, and identity, it requires knowing how important that is and how to steward it, which a lot of people are working on. Hence, there's also a proliferation of insight-generating apps for users today. There's this really great essay by Lauren McCarthy, she's the creator of p5.js, which I love, and she says that software is used by many, but intelligible by the very few. And one great thing about building in a community like this is that there are network effects for blockchain technological literacy. So as builders and designers, I don't think this means that we should expect that people using these platforms will be as technical as us who are building them. But we also don't wanna oversimplify to the point of disempowerment and learned helplessness. What does that look like in practice? Make "don't trust, verify" a user pattern, and not just a name for a cultural movement.
So that means showing people valuable information at helpful moments, having humans answer questions, having great docs, letting your UIs have dense information but making it collapsible and within reach, and keeping those routes open for people to verify information. So yeah, those are some stories from building on decentralized apps for the past few years. Thank you very much. Quick question. Is there certain information that you saw from Files, et cetera, as far as verifying, not trusting, that you think would be good patterns to follow in the rest of the ecosystem as well? Yeah, so we showed things like the content identifier so people can view the content on an IPFS gateway, which looked a bit scary for end users; they didn't like it. So what we ended up doing was putting it in an info menu item when you're looking at your files. Like, imagine a Google Drive: there's an info section, and then there's a technical info section where we tease the CID, but we also show the miners so you can see it on an explorer. Do the users understand the value of that, or is it just like, okay, it's there and I'm good enough at this point? Our heat map showed that they would click on the CID, but they didn't click much on the miner and the other stats. Thank you very much. Thank you. Next up, we have Jacek from Nimbus, who's gonna be talking about the UX that light clients enable and the opportunities there. So thank you very much. Hello everyone. Jacek from Status, as introduced. I'd like to start with two little stories. It's actually gonna tie into what Cindy was just talking about, which is how users like to use systems, but they don't necessarily understand the fine nuances of what they're doing when they're using them. So I was learning about networking a few years back, maybe 20 years back. And these were really fun times, because we used to really trust the internet. Like, we were very optimistic about things.
So we didn't really think about encryption or any of those things. However, Tor was coming up as an anonymity solution, right? And if you were running a Tor node, you could actually look at all the traffic going through. And funniest of all, you could look at people's passwords when they were logging into their bank accounts, their email accounts, or whatever they were doing, because it was all unencrypted. We were in this naive world where there were no bad guys. The other story I'd like to tell is one of Status, where maybe four or five years ago we started trying to build an application that would connect to Ethereum and download data. And we used to run a Geth node inside the Status app. This was nuts. It consumed way too much bandwidth, nobody wanted to use it, their batteries were burning up, and so on. So we started a little bit of infrastructure research, and we've been going at it for a few years. And there's really exciting news on this front now, after the merge, because we've finally gotten what's called an in-protocol light client. So how that works, briefly: it works by selecting a committee that attests to the fact that a particular block is the most recent history that everybody's agreed to. And they include that information in the previous block, right? So with fairly high certainty, post-merge, now that we have validators that attest to blocks, we have a protocol where, with a single signature per 12 seconds, which is less bandwidth than your Gmail app uses, we can actually verify the data that is coming from all third-party providers, like Infura, like Alchemy, like whoever else might be providing data to your app. The way that works in detail, I'm not gonna talk about. There's a great presentation by Etan, the guy that actually implemented it, both in the Ethereum specification and then in our client.
But the cool part is that with these signatures, we actually have the possibility to verify that the data we're getting from providers is the data that everybody agreed on. What does that mean? Well, the way we used to operate in web two was that we would trust our provider of data. We would trust Google, we would trust our bank, we would trust whoever was there. And why did we do that? Because we had legal recourse to sue them if they did something wrong. In the systems that we're building today, we're kind of interacting with randos on the internet. And randos out on the internet might be like me 20 years ago, just looking at the passwords and being very amused by the naivety of people using those systems and thinking that they were anonymous and safe, as Tor was presented at the time. What this does is create a model in which data providers no longer have to be trusted. What does that protect against? Well, many things. Infura, for example, most of us know it; Alchemy, most of us know it; Pocket Network, most of us know it. What we do is ask them for our balance, we ask them for the state of our NFT, whatever. We ask them to create a transaction on our behalf and submit it to the network. And we hope that it's all kind of fine. Submitting a transaction, we sign it, so they cannot manipulate the transaction, but they can manipulate the inputs to the transaction. They can give us a false balance. They can say that the NFT belongs to you when in fact somebody stole it on the way to you, the one that sold it to you maybe, right? So with the light client protocol, we can suddenly create user interfaces that verify all the data that we get from all our providers. And then we no longer have to trust them to provide correct data. And what does this enable? Well, on mobile phones, wallets that verify data suddenly become feasible.
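The verification model described here, where the wallet accepts a provider's answer only if it checks out against a root everybody agreed on, can be illustrated with a toy Merkle branch check. A real light client verifies SSZ Merkle proofs against a header signed by the sync committee; this sketch uses plain SHA-256 and invented data purely to show the shape of the check.

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    """Hash a pair of child nodes into their parent."""
    return hashlib.sha256(a + b).digest()

def verify_branch(leaf: bytes, branch: list[bytes], index: int, root: bytes) -> bool:
    """Walk a Merkle branch from the leaf up to the root, hashing left or
    right depending on the leaf's position, and compare with the trusted root."""
    node = hashlib.sha256(leaf).digest()
    for sibling in branch:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root

# A provider (Infura, Alchemy, ...) returns a balance plus a proof; the
# wallet only accepts it if the proof checks out against the root it
# trusts via the light client protocol. Data below is made up.
balance_leaf = (5 * 10**18).to_bytes(32, "little")       # "your balance is 5 ETH"
sibling = hashlib.sha256(b"some other account").digest()
trusted_root = h(hashlib.sha256(balance_leaf).digest(), sibling)

print(verify_branch(balance_leaf, [sibling], index=0, root=trusted_root))  # True
print(verify_branch(balance_leaf, [sibling], index=0, root=b"\x00" * 32))  # False
```

The key property is the last line: a provider that lies about the balance cannot produce a branch that hashes up to the consensus root, so the wallet can reject the response without trusting anyone.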
They use less bandwidth than anything else out there that you have ticking all the time. We can create smart contracts which understand that the consensus of another chain has landed at a particular block, and therefore bridges become more secure. We can create a zero knowledge proof of all the past history of this consensus and, with a single small verification, similar to Mina, we can again verify that everybody's agreed on a particular state. So if I were doing my experiment today on Tor, I would notice that everybody uses HTTPS, right? Everybody uses SSL, everybody encrypts the traffic. There are no more email providers that allow you to log in without a secure connection. And I would argue that six months from now, there will not be a single wallet out there that does not verify the stuff that you're getting from your data provider using the light client protocol. And that's the world that I want to see. So if anybody's working on wallets, if anybody's working on any kind of tooling, join me. I'll be happy to explain the protocol. We have libraries available that you can just include in your application that allow you to add this functionality at minimal cost. If you're a user, we have a proxy so that you can run this little mini node in your home setup. It uses almost no bandwidth, and what it does is sit between, say, MetaMask and your Infura. And when your MetaMask makes requests, it verifies all that information that it's getting from Infura using the light client protocol, and it will block any data that is not valid according to the consensus that everybody's agreed upon. So how does that change the role of Infura and that ecosystem at that point? Are you still relying on Infura? So what we're relying on Infura for is that they become like a static data provider. They still analyze the chain, they run the node for you, and so on.
And when we ask for data, they give you the balance, for example, and they give you a cryptographic proof of how they came to the conclusion that your balance is five ETH or whatever it is, right? And with that proof, we can compare it to the latest state as we know it from the consensus protocol. And then if the proof checks out, we know that they are giving us correct data. So they're still very valuable in that model, right? They're still very valuable, because this is basically the lowest-bandwidth way to get data available today. I'm saying today because we're also working on another project where you can swap out Infura for a decentralized range of servers that will be providing those data points. But even in that solution, you don't know if the data you would be getting from the network is true. So you would use the same kind of validation. Are you talking about the Portal Network at this point? That's the Portal Network, and we have a client called Fluffy for it. So this is like one step on a long journey, the most important one arguably. It's taken us four years to get here. There are many steps beyond this, but really, run the proxy if you're using a wallet, if you're using MetaMask or anything. If you're developing a wallet or any kind of service that interacts with Ethereum and uses a third-party data provider, check out our light client work. So, quick question: from a user perspective, how do you convince people that they would now benefit from running some of these services? As we know from past experience, once people get into the habit of relying on third-party intermediaries, it's really hard to convince them to take on more responsibility. So what do you think, as a community, what kind of education or work do we need to do for people to realize this would be a good value add for everyone to be running on their own devices?
I think the main message here is that they don't have to be running everything on their own devices. Thanks to the light client upgrade that is part of the Ethereum merge protocol, we can provide something that is lightweight enough that the third-party providers become just data stores, like a hard drive, basically. And the journey is the same journey that we made from the unencrypted internet to the encrypted internet. Right now everybody expects a little icon saying that their connection is safe and the certificate is valid and blah, blah, blah. I imagine that a wallet would want to show, in a similar fashion, that it has validated the data coming from the provider using a light client solution like this. Okay, perfect, right? It's exactly the same journey. And we have to educate people that if you don't do this, it's unsafe: a hacker might hack Infura, and even if Infura is good, the data coming out of it could be bad, right? All right, thank you very much. So next up we have James from Prysmatic Labs, now part of Offchain Labs of Arbitrum, congratulations on the acquisition, to talk about improving Ethereum staking and adoption. Any slides or anything? Or, yeah, I can just stand up. You can sit down if you want as well, no issues. All right, hello everyone. I'm James. I work on the Prysm consensus client. I recently completed my first year, and what an amazing year it was. I was able to participate in the Altair hard fork, the merge, and also the company being acquired, so now joining Offchain Labs. During the process of these hard forks, I was able to interact with a lot of our users and notice some areas where we could increase adoption in Ethereum staking. So today I'm just gonna go over a couple of these areas that I have some opinions on and shed some light on the staking infrastructure in general. So the first opportunity I see is around service management. As we're growing in the staking community, there are all these other tools that you can add in.
But connecting these and enabling these for our users might not necessarily be in the protocol specs. So when designing the consensus clients, we follow the protocol specs defined by the Ethereum Foundation. However, there are plenty of other things that aren't defined in those specs, and client teams have a lot of leeway. One example: when I first started, Prysm was the only client with a UI. All the other clients did not have UI capabilities, and they did not have APIs to allow management of their keys. So myself and a handful of consensus devs got together and created the Key Manager API standard, which is not actually part of the protocol spec. And this enabled all client teams to have their own UI, because now there's a REST API in front of the validator client. So that's just one example, but as we continue to grow, now there are plugins like Web3Signer for remote signing. If you don't want to include your validator keys in the validator client itself, you can use this third-party tool called Web3Signer. Prysm supports that, and I believe most of the client teams support that, but that again is not part of the protocol spec. As we continue to grow, I'm sure all of you have heard of MEV and all the different tools that are being created over there. As we migrate to full PBS, we do need community support on what we need to add as integrations into the system. And also with withdrawals coming. So there are a lot of different areas where plugins can be attached to these clients that are not part of the protocol spec, so we need feedback from the community. So with more and more services being built and pieced together, there are a lot of opportunities to align the user experience, even if it's not part of the protocol spec. The next topic that I wanted to talk about is improving the user journey for entering testnets.
I think that's an area that we could really improve, as right now you need to get your Goerli ETH through Discord, or social media, or really sketchy sites, or just connections with certain people. This makes institutional adoption very difficult. At a company like JP Morgan, or these banks, or even Google, it's difficult to connect to these social media apps in order to get the testnet ETH that you need in order to participate. And so typically at a traditional tech company, you would develop the software locally, move to a dev environment, move to a test environment, and then into production. If you're working on staking, or developing in the DeFi space and you want to run your own node, it should work in a similar fashion. I've heard of people sending to the deposit contract and then going directly to mainnet without any experience in staking at all. So it's really important to try to increase this adoption. All in all, this is a call to action to developers. There are a lot of opportunities in this ecosystem that don't require prior knowledge of cryptography, fork choice, consensus, or how the EVM works. There's a lot of opportunity just in how these microservices or services interact with each other, and in helping the community create better standards that result in a better user experience. Thank you very much. Thank you. So I had a few questions touching on some of the points that you raised. The first one was around the standardization work that you guys did for the key manager stuff. What were the challenges there? How did you actually get community buy-in? Because I know on the back end, we tried to create a lot of Discord channels and have those conversations, and eventually you and Michael were able to push it through. What was that journey like, and how do you see that repeating in the future in the different areas that you highlighted now? Yeah, that was pretty amazing.
We were actually trying to push that through the EthStaker community. Consensus teams and all core devs are on a very tight schedule, they want to get the protocols done, so these things usually fall by the wayside. And actually dapplion, so I'm calling out the Lodestar team, was the first one to take initiative, create that PR, and ask the community whether or not we could include this. So he went to the Beacon API specs and created a PR, took that initiative, and got the ball rolling. The next step: people reviewed it, they had comments on it, but that wasn't enough to get all the client teams aligned together. So that took some more social engineering, a lot of reaching out to one person, connecting to the next person, and getting everyone together. So a lot of help from the EthStaker community to try to push this through. We needed a UI for other clients, and that really enabled tools like DAppNode or Avado, these staking tools for people that are not as technical, and really drove that forward. Yeah, so I think the other question is: now that the merge has happened, and maybe there's some more capacity available in these teams, is this something that the client teams, maybe other client teams, can weigh in on? Is some of this becoming more of a priority, or is resource allocation being put towards this, or not really? I'll take that as a no. So how do we move those conversations forward? I'd be curious to hear from all the client teams and see what that looks like. Who's responsible? How do you create a community and a forcing factor for some of this thinking to happen as a whole? That's why I wanted to do this talk in the first place, to get a call to action out to developers. As we move from the merge to the surge now, client teams will be even more focused on protocol-level decisions. So the next three things: censorship resistance with MEV, scalability with EIP-4844, and withdrawals.
So client teams are pretty much very focused on that and on the continuing development around fork choice and protocol-level design. So it's very important that we get more feedback and more developer interest in the user experience. We don't want to leave people behind; we want to increase adoption. So that kind of perfectly segues into the next talk, from DAppNode, on the challenges they're facing as far as integrating some of these clients into their services and the front end that they've built out, which is amazing as far as just running these services in a pre-configured device or application as it stands. You can just connect here if you wish, and this is the clicker. We were also supposed to have Adrian from Lighthouse, but he might not be able to join because he has a session right now, actually happening. So perhaps we'll do a quick lightning talk from him after lunch if he is able to join. But aside from that, I'd be curious to get some of the individuals working on the front end to have a discussion on what we can do to move some of this stuff forward as well. All right, all right, thank you, James. I think this was a great, great summary of what we're doing as a community, from the community side, as integrators: that we need to make the UX easier for the users, right? So, DAppNode, very briefly: we help people run nodes. We help people run validators or nodes or anything you want by providing this free, open source software that you can install on any machine, and it'll turn it into, well, it's actually Linux- and Docker-based, and any machine that you install this on will give you this set of tools to manage nodes in a super easy way, right? So you'd go here and decide that you wanna install, like, a Pocket Network node or a Nimbus node or whatever it is. And DAppNode has been a really good solution lately for those who want to validate and maybe don't have the technical knowledge, right?
So we are really in touch with the users that do not want to spend hours on DevOps. We were just doing the calculations the other day, and Geth rolled out 16 releases in the last 12 months. Prysm, 20 releases in the last 12 months. If you're a validator, if you're a solo staker and you wanna test those, as James was saying, please do test those on a testnet first, you would need a couple of hours to test each of these new releases, right? That's 72 hours a year on testing and deploying the new versions of the nodes. So there's a lot that DAppNode does on the integration side so that it just works. It has auto updates. People do not have to worry about this at all, because of the way it works: all these packages are actually in an Ethereum smart contract, which is the repository. That's the Aragon Package Manager. And the content, the Docker images, are distributed through IPFS. So DAppNode has an IPFS node inside, and it also has an Ethereum node inside. So you're actually never calling home. You're using nodes on your machine to know what content is available and to get this content onto your computer. Okay, so back to the story that James was telling. We're trying to make it super easy for people to stake with no technical knowledge, right? And one of the problems that we had was that there was only one client with a UI, which was Prysm. And dapplion at that moment was already working at Lodestar, sorry, but he was also working at DAppNode. So actually I was very involved in kicking off this initiative of the Key Manager API, because it's something that we needed for DAppNode. We needed the Key Manager API so we could have client diversity in DAppNode. I'm not gonna go on with this story any longer, we'll see the UI later. There are still a lot of problems; DAppNode itself is not free of its own problems.
And when you try to validate with DAppNode, you'll find that you need a lot of prior knowledge, you need a lot of reading up. What is a node? The definition of a node has changed. Now you need an execution layer and you need a consensus layer, and you need both. And then you need a validator client. And then if you wanna manage the keys, which is what we do in DAppNode, you need a UI to import and export the keys. And how do you find those packages in here? There's Teku, there's Lighthouse, there's Nimbus, there's Teku Prater, there's SSV Prater, it's chaotic. It's like, okay, we need to guide the users better. So that's what we are working on right now, that's what we did, which is, so it looks like shit, okay, because it is not beautiful, but it's very fucking functional. And it guides the user to do whatever they need to do. It says, hey, choose an execution client, and you can choose one of these three. Hey, choose a consensus client, and you can choose one of these four. It's funny, because dapplion has been working on DAppNode for most of his career, and guess which consensus client is not in DAppNode? Fucking Lodestar. Lion, get your shit together, come on. Then the Web3Signer, which maintains the keys. And this is a really key UX improvement, the fact that the keys are in the Web3Signer, because now you can, thanks to checkpoint sync, by the way, really fucking good work, client implementers, thanks to checkpoint sync, you can just literally go here, change this, apply changes. Now we have switched the configuration to Teku. Now it'll download Teku. And thanks to checkpoint sync, you'll be validating in like two minutes. You'll be online and validating in two minutes. There's literally no switching cost.
Another thing that we need to manage in the background for the user is the fact that you cannot have two consensus clients pointing at the same execution client. So we take care of turning it off. We take care of all these little edge cases and little things to make it super simple for the user. So now let's talk about the keys. Why are the keys in the Web3Signer? The keys are in the Web3Signer so that when I change this, I don't need to take the keys out of one package, remove them, change, download the other, put the keys in there, and then start validating. This is a recipe for disaster. If you forget that your keys were in the other client, double signing, slashing, not a good idea. So the keys are in the Web3Signer. And what the Key Manager API is being used for is this. So there are not a lot of frontend people at DAppNode, okay? You can tell. And definitely no designers. So please give us a hand. This is open source. We are officially maintaining this so that everybody can use it. This Key Manager UI, which uses the Key Manager API, is free and open source, and anybody can use it. Now, if you guys want to make it more beautiful, please do so. We're trying to integrate some stuff right now, like that each validator key will have its own withdrawal, sorry, fee recipient, not withdrawal, fee recipient. Because that's something that, as we have seen here, in DAppNode, because of the way it works and because we try not to make it super complicated, when you configure it, you pass the value of the fee recipient address, and that's where all of the keys' rewards go. So this UI, where you have the list of keys, sorry, I could have put some fake keys here. Well, it doesn't have this. And why doesn't it have this? Because there's no standard. There's no standard for changing this.
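The missing standard being asked for here, a way to set the fee recipient for each individual validator key through the Key Manager API, might look something like the following. The route and payload below are a hypothetical sketch in the style of the existing Key Manager API, not taken from any published spec; the code only builds the request, it does not send anything.

```python
import json

def set_fee_recipient_request(pubkey: str, fee_recipient: str) -> dict:
    """Build (but do not send) a hypothetical keymanager-style request
    that assigns a fee recipient to a single validator key."""
    return {
        "method": "POST",
        # Hypothetical route, modeled on the Key Manager API's path style.
        "path": f"/eth/v1/validator/{pubkey}/feerecipient",
        "body": json.dumps({"ethaddress": fee_recipient}),
    }

req = set_fee_recipient_request(
    pubkey="0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af5"
           "26f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
    fee_recipient="0x0000000000000000000000000000000000000001",
)
print(req["method"], req["path"])
```

If every client exposed one route like this, integrators such as DAppNode could offer a per-key fee recipient field in the UI without writing a separate implementation per client, which is exactly the coordination cost being described.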
So right now, if we want to do it, we need to create a specific implementation for each client, using whatever system each client has for changing the fee recipient for each validator key. So this is one of the things that we are going to be pushing, just like we pushed for the Key Manager API. Standardization, guys, standardization is key. I understand that the consensus teams are super busy, and the work that you're doing is fucking incredible, and spending time on coordination might seem like a waste of time, but then it will have to be done later. You're just kicking the can down the road, right? It will have to be done by somebody later on, and there's literally nobody better than you to decide on a simple way of doing it. So we're going to be pushing for this in the next round. We're going to do something similar to what we did with the Key Manager UI and the Key Manager API. But then there's going to be something else. There's going to be another thing that needs to be integrated, another thing that makes life easier for the users. So I would hope that initiatives like this get us all together in a room and say, hey, what problems are we having? What problems are we having super downstream on the integration side? What problems are we having up there on coordination? What problems are we having at the design-process level? And yeah, I just want to thank Akil for bringing us all together. Thank you very much. Yeah, so I think we've seen there are two opportunities for clear working groups to happen. One is the wallet one. One is probably client team integration and standardization. Adrian did have some ideas around standardization that he wanted to push as well. So even if we can't get it done in person, maybe I'll create a back-channel communication to kickstart some of this stuff.
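The coordination cost being described, no standard endpoint means one bespoke adapter per consensus client, can be sketched like this. Every client-specific detail below is invented purely to illustrate the shape of the problem; none of these are real client APIs:

```python
# Hypothetical sketch of the pain of having no standard: each consensus
# client needs its own adapter for setting a per-validator fee recipient.
# Every mechanism named below is invented for illustration.

def set_fee_recipient_client_a(pubkey: str, addr: str) -> str:
    # Imagine this client only supports a proposer config file on disk.
    return f"client-a: wrote {addr} for {pubkey} into its proposer config file"

def set_fee_recipient_client_b(pubkey: str, addr: str) -> str:
    # Imagine this one exposes a private, non-standard HTTP endpoint.
    return f"client-b: called its private HTTP endpoint with {pubkey} -> {addr}"

ADAPTERS = {
    "client-a": set_fee_recipient_client_a,
    "client-b": set_fee_recipient_client_b,
    # ...one more adapter per client, forever, until a standard exists.
}

def set_fee_recipient(client: str, pubkey: str, addr: str) -> str:
    """Dispatch to the right client-specific mechanism, if we wrote one."""
    if client not in ADAPTERS:
        raise ValueError(f"no adapter written for {client} yet")
    return ADAPTERS[client](pubkey, addr)

print(set_fee_recipient("client-a", "0xfakekey", "0xfeeaddr"))
```

With a standard, the dispatch table collapses to a single code path, which is exactly the argument being made for coordinating now rather than later.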
But beyond that, I think some people might want to have other conversations in the room that we might not have given much priority. So before we go off to lunch, if anyone wants to come up, ask any questions, or do an open-mic session about other issues they're passionate about, I'd love to give you the opportunity, because we were a bit behind schedule, so I didn't give opportunities for people to ask questions of all the different speakers. So if you do have any questions or anything else you want to add, this would be a great time. Aside from that, I think we can go for lunch and then break into groups. And maybe if Adrian comes back, we give him another 10 minutes to propose his standardization stuff and take it from there. And if you have friends or other individuals that you think might be useful for those working groups, please do invite them after lunch and we can kick off from there. So thank you very much. I think you had some questions? Yeah. Just take a mic, yeah, that'd be amazing. So I think lunch starts at 12:30, we can come back in here at one, half an hour should be sufficient, and we'll get started from then. Thanks. Thank you for hosting and thank you for speaking, everybody. I had a question about improving the accessibility of staking, very much along the lines of what James has been talking about. You know, I think I could convince my mom to try and create a wallet, like a MetaMask wallet on Chrome. I don't think I could convince her to wrap her head around the complexities of, you know, what is a node? How do you define a node? How do you think about a node? She's not a very technical person. She's capable of downloading an app and maybe creating a wallet.
What I've been thinking about a lot is how close we are to a point where somebody completely non-technical could, in two steps, you know, download an app, deposit ETH, and run a node, and then maybe under the hood, whatever solution that is, could just roll a dice, choose a client, choose the EL and CL, automatically. I guess the main question is, how do we get to a point where it's as simple as one, two? How far away are we from that? I know standardization is a big part of that, but maybe even without standardization it would be possible, and some assumptions could be made that could be tested, and then that testing could influence standardization sort of in a backwards way. So what are your thoughts on that? I'm addressing it to you because you were talking about a lot of this stuff, but anyone, feel free to. I would like to ask you the question: why do you think your mother, or someone who is not so technical, would want to invest 32 ETH in the first place and stake, or should be staking? Are you thinking about it from a participation perspective, and then is running a node more important than staking? I wanted to create that distinction. Yeah, I'm kind of bundling run-a-node and stake-ETH into a single bucket, and thinking about how we pull that activity along the technology adoption curve by making it so simple that you don't need to understand technical terminology, but you do want to participate in decentralizing the digital economy. Like, philosophically, maybe your values are aligned, but your knowledge and technical familiarity is the limiting factor. So assuming that you have values that align but not technical familiarity, I feel like there are a lot of people in that group who are being prevented from participating. Makes sense. Yeah, so I just want to clarify, there's a common misconception within the ecosystem as well: you can run a node without having to validate or stake.
So that education is missing in the ecosystem quite a bit as well. And then there are the complications around staking and what is actually required, like understanding that distributed validator technologies are coming out where your mother can pool with all her friends, et cetera. So that might make it easier, but from the front-end perspective you can probably touch a little bit on that. And then I have a bit of context on what's being done on Status and what's been done on Lighthouse as far as getting this stuff moving forward, so I can probably touch a little bit on that as well. Love to hear your thoughts. Thank you. So yeah, this is a little bit of what we're trying to do. And we're definitely not there, but what you have seen now, where you have to choose all of this... well, let me start from the beginning. We can take the approach of free open-source software, but that's not the approach that's going to get us this mass adoption, right? Because then you need a Linux installation, you need all this sort of stuff, and that will not make it happen. People will just not do that. So what we're doing as well is selling hardware. We're selling hardware that comes pre-installed and is plug and play: you plug it into your router, you turn it on, it emits a hotspot, and then you can connect and configure your VPN, and then you can enter and access this machine, which is effectively a server that lives in your kitchen, but we don't call it a server because that's scary. We call it your Dappnode Home. And yeah, you can access it from wherever you want. So this machine, first of all, the first hurdle is that that's a different mentality. That's not the free open-source mentality of, I'm going to hack a node together. That changes. It's like, I buy this machine and I expect to have a return from it.
And here's where this links very well with what you were saying about running a node without validating: that's not going to happen. We've been at Dappnode since 2017. We've been trying to get people to run nodes, and Dappnode only got users, non-technical users, when there were incentives to run a node, when there was money on the line. So now we have this first step where people have to buy something, because they don't have the experience of how to install Linux or whatever. People have to buy something. And then there's the incentives part, and if they buy something, what's their expectation? There's an expectation of return. And then it just becomes easy on the implementation side, because what you saw me showing on Dappnode, that column-like thing where you've got to choose your execution client and your consensus client, we could have an API that automatically chooses whatever combination is the least used. That's great for client diversity, and you just press a button and it tells you, okay, put your fee recipient address here, and everything else is, boom, deployed for you. That would be extremely simple, right? So in Dappnode we're thinking about doing that, and then having an advanced mode where you choose yourself. But then there's a set of constraints around what we want to do in Dappnode as well, because we don't want to be calling any APIs: what you have on your machine is yours and it never calls home, there's no telemetry. Maybe somebody else, like a competitor, is less focused on decentralization. So that's a different issue, but whatever. Yeah, it would be very simple to do something like that, and it could be done tomorrow, yeah.
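The auto-selection idea mentioned here, pick whichever EL/CL combination is least used on the network to nudge new stakers toward minority clients, could be as simple as the sketch below. The network-share figures are made up for illustration:

```python
import itertools

# Made-up network-share figures, purely for illustration.
EL_SHARE = {"geth": 0.80, "nethermind": 0.12, "besu": 0.08}
CL_SHARE = {"prysm": 0.40, "lighthouse": 0.33, "teku": 0.17, "nimbus": 0.10}

def least_used_combo(el_share: dict, cl_share: dict) -> tuple:
    """Pick the EL/CL pair whose combined network share is smallest.

    Defaulting new node operators to this pair would push the network
    toward better client diversity with a single button press.
    """
    return min(
        itertools.product(el_share, cl_share),
        key=lambda pair: el_share[pair[0]] + cl_share[pair[1]],
    )

print(least_used_combo(EL_SHARE, CL_SHARE))  # the minority pair in this toy data
```

Note the tension the speaker raises: fetching live share numbers means calling an external API, which conflicts with a strict no-telemetry, never-call-home stance, so a product like this might instead roll a dice locally with static weights.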
So I think the other perspective is that the experiences are quite distinct and separated as far as the utility of the network and the services people are using. The way Status is looking at it is: if you're running your node for messaging or all these other services, now you have a utility, a reason, for bundling in your Ethereum clients there as well. So now you have this whole vision of web3 that we were kind of promised with the Mist browser, where you have the ability to control all your web3 services. And then it makes a little more sense for users to say, okay, even without monetary incentives, I want to start running services where I see value: I'm running my IPFS node, maybe I start running an Ethereum node as well, because I'll be secure and I'll have more utility around the services I'm using. And if I'm already bought into this whole web3 vision, it becomes a little easier and more palatable. But because you do have this front-end layer for messaging, and you have all these wallets, everything built into the Status infrastructure, it just becomes a little bit easier to plug an Ethereum client in there as well. So I think there are two layers: do people comprehend the whole vision of web3 and how it comes together? And then the other one is, how do you incentivize individuals to run that infrastructure without it being staking? If you're looking at individuals in Bogota, or in India, or in Africa, for them to actually run those nodes and decentralize the ecosystem further, maybe there is some sort of incentive mechanism we can come up with where they benefit without having to put up their own cash, because 32 ETH is too much for them.
Even with distributed validator technology, if they're providing a service to the network, perhaps there is some sort of incentivization given to them in the long term, so that we decentralize the ecosystem beyond just Europe and America. So I think there's a lot of thinking that needs to be done at the protocol level, or just from the community, from a funding-mechanism perspective. So it's interesting to see how we start thinking about these problems over the coming years, now that the proof-of-stake ecosystem is developing, and it'd be exciting to have thinkers like you come up with ideas, experiment, and evangelize as well. So thank you very much.