Welcome back. Our next speaker is Chance Hudson, and he'll be talking about how to build an identity ecosystem on UniRep. UniRep itself is an anonymous reputation system. So, Chance. Thank you. All right, so thank you all for being here. I'm excited to talk to you today about a protocol I work on called UniRep. So, before we get into it, this is a quick roadmap of how this presentation is going to go. First, I'm going to give a semi-technical overview of what the UniRep protocol is. Then I'll talk a little bit about improving the user experience of ZK and blockchain applications in general. And then I'll talk about how we can scale ZK on the blockchain, and where we're at with capacity right now. Right, so let's dive into it. UniRep is short for "universal reputation," and you can think of it as two different things. First, it's an identity system that gives you anonymity, and it does this by creating public keys that change over time. The second component is an attestation system. Within the system we have attesters, which you can also think of as applications or just smart contracts, and these attesters give reputation to users. You can think of it this way: attesters are to reputation what ERC-20 contracts are to tokens. They define how the reputation is distributed, spent, destroyed, and everything like that. We define reputation as two unsigned integers, positive and negative reputation. We do this so we can represent net-negative reputation in ZK proofs and in smart contracts without having to deal with signed integers or wrapping around unsigned integers or anything like that. So that's one component of the user state. The other is the graffiti value, which the attester can use for anything they want within the application. It's just 32 bytes, and the attester can use it as they like.
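To make the two-counter idea concrete, here's a minimal Python sketch of reputation modeled as a pair of unsigned integers plus a 32-byte graffiti field. The class and method names are illustrative, not the actual UniRep API:

```python
# Illustrative sketch of UniRep's reputation representation (names are
# hypothetical, not the protocol's real API). Reputation is stored as two
# unsigned integers so net-negative balances never require signed
# arithmetic in circuits or contracts.

class Reputation:
    def __init__(self, pos_rep=0, neg_rep=0, graffiti=b"\x00" * 32):
        assert pos_rep >= 0 and neg_rep >= 0
        assert len(graffiti) == 32  # graffiti is a free-form 32-byte value
        self.pos_rep = pos_rep
        self.neg_rep = neg_rep
        self.graffiti = graffiti

    def attest(self, pos=0, neg=0):
        # An attestation only ever adds to one of the two counters.
        self.pos_rep += pos
        self.neg_rep += neg

    def net(self):
        # Net reputation can be negative even though storage is unsigned.
        return self.pos_rep - self.neg_rep

r = Reputation()
r.attest(pos=5)
r.attest(neg=8)
print(r.net())  # -3
```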
And so one example use case of this is that an attester can allow users to register a username. The user requests a username, and then the attester attests, giving the hash of the username to the user as graffiti. Now, when the user takes an action or makes a proof, they can prove the preimage of the graffiti and move from anonymity to pseudonymity. So that's one example use case that's relatively simple. A more complex example would be storing the state root of a Merkle tree inside of this graffiti. Then the attester can extend the ZK proof system to prove things about the contents of the tree that's in the graffiti. So, for example, they could use an incremental Merkle tree to give achievements to the user, or to track actions the user has taken, anything like that. So it's a very extensible system. The two main properties of UniRep are anonymity and non-confidentiality. That means we can see everything that's happening inside of the system, how much reputation is being transferred and so on, but we don't know who's doing what. So let's talk a little bit about the UniRep identity system. We build on top of a system called Semaphore, which is also developed by the Privacy and Scaling Explorations team. With Semaphore, we have a public-private key system; that's the simplest explanation. It has two secrets, a trapdoor and a nullifier, and we define a public key as the hash of the hash of those two secrets. We also call this an identity commitment in Semaphore terminology. We use the Poseidon hash function to calculate these values, which makes it a ZK-friendly protocol, and that means you can extend it to do arbitrary things. So in this example, we have just a public-private key system, which isn't particularly useful on its own. But using a ZK proof, you could extend it to do signatures, for example, by writing a proof that proves knowledge of the secret values as well as the hash of some data that you want to sign. You can extend ZK proofs in arbitrary ways like this.
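As a rough sketch of the identity commitment construction described above, with SHA-256 standing in for the ZK-friendly Poseidon hash the protocol actually uses over field elements:

```python
import hashlib

# Illustrative sketch of the Semaphore-style identity commitment.
# SHA-256 stands in for Poseidon here purely for demonstration; the
# secret values are placeholders.

def h(*parts: bytes) -> bytes:
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

trapdoor = b"secret-trapdoor"    # secret 1
nullifier = b"secret-nullifier"  # secret 2

# identity secret = hash(nullifier, trapdoor)
identity_secret = h(nullifier, trapdoor)
# identity commitment = hash of that hash; this acts as the public key
identity_commitment = h(identity_secret)

print(identity_commitment.hex())
```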
Another way you can extend it is by building something like UniRep. So we have these public keys that change over time, and we call them epoch keys because they're valid for the length of one epoch, which is some amount of time set by the attester in question. The epoch key is the hash of the identity nullifier, the attester ID, the epoch, and a nonce. As you can see, because the epoch is part of the hash, the key changes every epoch. We also have this nonce value, which is a value between zero and two by default, and it allows us to give the user multiple epoch keys for a single epoch. So if a user wants to commit an action, and then wants to commit another action but doesn't want to link their identity between the two keys, they can use different epoch keys and still be able to prove the same amount of reputation. And just like Semaphore, this system is ZK-friendly and therefore extensible. So if you wanted to make a signature system using these epoch keys, you would use the exact same approach: you prove control of the nullifier in the public signals, and then you pass in whatever you want to hash and it all gets output. Yeah, so now I want to talk a little bit about the data structures we use in UniRep. We have this identity system, but how do we assign reputation and graffiti to users in the system and then continue to prove it? We really have two structures. First we have the state tree, which tracks whether or not a user is a member of the current epoch. As you can see, it's an incremental Merkle tree that we store completely on chain. The leaves themselves are the hash of the private identity nullifier, the attester ID, the epoch, and the reputation the user has at the start of the epoch. So a user inserts a leaf into this tree when they join the current epoch.
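The epoch key derivation described above can be sketched like this, again with SHA-256 standing in for Poseidon and with illustrative byte encodings:

```python
import hashlib

# Sketch of epoch key derivation (SHA-256 stands in for Poseidon; the
# byte encodings are illustrative, not the real circuit layout).
# epoch_key = hash(identity_nullifier, attester_id, epoch, nonce)

def epoch_key(identity_nullifier: bytes, attester_id: int,
              epoch: int, nonce: int) -> int:
    assert 0 <= nonce <= 2  # default: three epoch keys per epoch
    data = (identity_nullifier
            + attester_id.to_bytes(20, "big")  # attester ID is an address
            + epoch.to_bytes(8, "big")
            + nonce.to_bytes(1, "big"))
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

nullifier = b"secret-nullifier"
k0 = epoch_key(nullifier, attester_id=1, epoch=7, nonce=0)
k1 = epoch_key(nullifier, attester_id=1, epoch=7, nonce=1)
k_next = epoch_key(nullifier, attester_id=1, epoch=8, nonce=0)

assert k0 != k1      # multiple unlinkable keys within one epoch
assert k0 != k_next  # keys change every epoch automatically
```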
So, for example, when a user signs up, we insert a leaf: the user makes a ZK proof and then creates a leaf with zero positive and zero negative reputation, because they're just joining the system, so they start with zero. The other structure we have is what we call an epoch tree. The state tree tracks the starting balance for the epoch, and the epoch tree tracks the amount of reputation that was received during the epoch. As you can see, it's a sparse Merkle tree, and we store only the root of this tree on chain. We use ZK proofs off chain to insert and update leaves in this tree, and then we just post the new root along with the ZK proof to the chain. You can see that the leaf values in this tree are the hash of the total reputation owned by an epoch key. We determine the index of the user's leaf in the tree using the epoch key. In the previous slide, you saw that the epoch key is just a hash of some data, and we determine a leaf in the tree by taking that hash modulo the number of leaves in the tree. So if we used this exact tree, we would take the hash modulo two to the third, because it's only three levels deep. But of course, if we did that, we would have lots of collisions. So when we operate this tree in production, we use a depth of 128: we take the hash modulo two to the 128th, and now everyone gets a unique epoch key, and we have 128 bits of collision resistance. So the whole idea behind UniRep is that users have these identities that are valid for an epoch, and at the end of that epoch, they pack up their reputation and move to a new identity in the next epoch by inserting a leaf into the new state tree. I didn't say this before, but we have a new copy of these trees every epoch. Right, so let's change gears a little bit and talk about the user experience for zero-knowledge applications.
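The modulo reduction from epoch key to epoch tree leaf index can be sketched like this (the three-level tree is the slide example; 128 is the production depth mentioned above):

```python
# Mapping an epoch key (a large hash output) to a leaf index in the
# sparse epoch tree by reducing it modulo the number of leaves.

def leaf_index(epoch_key: int, depth: int) -> int:
    return epoch_key % (2 ** depth)

key = 0xA3  # a toy epoch key (163)

# With the 3-level example tree there are only 2^3 = 8 slots,
# so collisions would be common:
print(leaf_index(key, 3))  # 3

# In production a depth of 128 is used, so distinct epoch keys
# effectively never collide (128 bits of collision resistance):
print(leaf_index(key, 128))  # 163 (key is already < 2**128)
```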
So we have these proofs in UniRep, and we want the user to be able to make these proofs inside of their browser, for example, on a computer. Here we have a graph of the proving time for proofs of various sizes on a few different devices. You can see that there's a sweet spot below 30,000 constraints, where any proof you make is going to take less than five seconds on most modern devices. That purple line at the top is an iPhone from around 2016, so even older mobile devices can still do it in less than 10 seconds, which is pretty acceptable performance. So where are UniRep's proofs on this graph? The first proof we have is a signup proof. It's very small, about 700 constraints, and all it does is output a hash value, so it takes less than one second. A little bit bigger, we have an epoch key proof. This proves control of an epoch key and also proves a leaf in the state tree. So it's a little bit bigger, about 3,000 constraints, but still less than one second on most devices. And way over here, we have the user state transition proof. In this proof, we add up the value in the state tree leaf, as well as the values in all of the epoch tree leaves, and output a new state tree leaf with the sum of the reputation the user owns. So we have to do multiple inclusion proofs over trees that are quite large, but we still end up with about 29,000 constraints, so less than five seconds on modern devices. And we can also execute this proof in the background, so the user doesn't have to know about it and doesn't have to wait. So I think a good goal for ZK applications, and for blockchain in general, is users not being aware that they're using the blockchain while they're using the blockchain. A lot of you probably use websites or applications like Spotify, Twitter, Reddit, or Stack Overflow. Just raise your hand if you know what kind of database those applications are using. Right, no one raised their hand.
Okay, one person raised their hand. We have at least one nerd in the crowd. But for the most part, users don't really care about the data structures backing the applications they're using. And blockchains are really the same: they're just databases with different properties. So the user shouldn't have to be aware that they're using the blockchain. This is the architecture of a traditional dapp. As an engineer, I really like this architecture because it's simple, and it was sort of unprecedented before blockchain. But for the same reasons that I like it, I think users kind of hate it, because they have to learn about the blockchain, and then they have to learn about wallets like MetaMask, and transactions, and gas, and gas prices, and gwei, and Ether, and they have to get Ether. It's a whole thing, and it's a lot for them to learn just to use a single dapp. Luckily, if we use ZK identities instead of wallets, we can build this more traditional three-tier architecture, where we introduce a relay that bundles the transaction and sends it. So the flow would be: the web app generates a ZK proof and gives it to the relay; the relay creates a transaction and then posts it to the blockchain. As for the economics of why the relay would do this, there are a lot of different schemes you could build, like subscription models or free trial models, all sorts of different things. One note about this architecture: the relay is not a trusted entity. The relay can censor transactions, or it can go offline, but it doesn't matter; the user can always send their ZK proof through a different relay, or broadcast it to the blockchain themselves. The relay also can't compromise the ZK proof itself, because if they change anything about it, the proof becomes invalid, and the proof determines what the user wants to do on chain.
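A toy sketch of the untrusted relay flow just described, with stand-ins for client-side proof generation and transaction submission; all names here are hypothetical:

```python
# Minimal sketch of the untrusted-relay architecture. A real relay would
# verify the ZK proof and submit a blockchain transaction; these stand-ins
# only model the trust properties discussed in the talk.

def generate_proof(action: str) -> dict:
    # Stands in for ZK proof generation in the user's browser.
    return {"action": action, "proof": f"zkproof({action})"}

class Relay:
    def __init__(self):
        self.mempool = []

    def submit(self, proof: dict) -> bool:
        # The relay cannot alter the proof: any modification would make it
        # fail on-chain verification. It can only forward or censor.
        if not proof.get("proof"):
            return False
        self.mempool.append(proof)  # stands in for posting a transaction
        return True

relay = Relay()
assert relay.submit(generate_proof("post-review"))

# If this relay censors or goes offline, the same proof can simply be
# handed to a different relay (or broadcast by the user directly):
backup = Relay()
assert backup.submit(generate_proof("post-review"))
```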
So I think a good goal, from a user experience perspective, would be: a user clicks a button, and in less than five seconds we show a loading animation while we generate a ZK proof and give it to the relayer. The relayer then packages it into a transaction and gives it to an L2 node, which returns an instant finality guarantee. At that point, we can stop the loading animation on the front end and say, okay, your action is complete. Even if we only get a weak economic guarantee from this L2 node, we can still do this on the front end, and then have an alternate code path where, if the sequencer doesn't include the transaction, we show a notification saying this action failed, or asking if you want to try again. Hopefully that code path is relatively cold, and sequencers include transactions as they say they're going to. So how can we build an ecosystem using ZK proofs in this abstracted architecture? One approach is detailed in this diagram. You can see there are three different attester applications and a user's browser, and each attester application is managing a unique identity for the user inside the browser's local storage. The advantage of this is that we can treat these identities more like Web2 authentication tokens and less like Bitcoin or Ethereum private keys. So instead of prompting the user for permission to make a signature or a transaction, we just make these ZK proofs in the background and give them to the relayers. At the same time, because we use different identities for each of these websites, if one of the identities is compromised, because a website injects some malicious JavaScript or does something like that, the damage is contained to that single attester. But we also want attesters to be interoperable. So if, for example, this review attester wanted to get a proof from the sybil resistance attester, how could we do that? And the answer is basically OAuth for ZK.
So in this example flow, I'm trying to sign up for the review attester, and the review attester wants a proof that I am a human being, in the form of a reputation proof from the sybil resistance attester. So in this flow, I create the signup proof for the review attester, then I get redirected to the sybil resistance attester and get prompted to make a proof with that identity. In this case, we shouldn't operate silently; we should definitely ask the user to approve this proof, because we're going to hand it to a different third party. If the user says yes, then we prove that we have the reputation, and we sign the hash of the signup proof to prove that we're the same person. Then we get redirected back to the original application and can continue signing up. Using this flow, we can get ZK proofs from different applications across different origins. Okay, so this is part three of the presentation: how can we scale ZK? First I want to talk about where we're at right now and the limitations of our current infrastructure. When we talk about scaling ZK, there are two limitations: one is the call data itself, and two is actually executing the verification on chain. So let's talk about call data first. For Groth16, we have about 130 bytes per proof, and for PLONK, we have about half a kilobyte per proof. Assume that we're talking about a post-EIP-4844 world, where we have two megabytes per block that we can use for blob data. At that point, we're able to do about 1,300 Groth16 proofs per second and about 330 PLONK proofs per second. That's not terrible. So let's open door number two and look at the verification costs. Groth16 and PLONK both cost about 250,000 gas to verify. That isn't totally accurate; Groth16 is a little bit cheaper, and they both scale up with the number of public signals in the proof. But for this example, we're just going to say they both cost 250,000.
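The blob throughput numbers can be reproduced with a back-of-envelope calculation, assuming 2 MB of blob space per 12-second block as stated above:

```python
# Back-of-envelope call-data math from the talk, assuming a
# post-EIP-4844 world with ~2 MB of blob space every 12 seconds.

BLOB_BYTES_PER_BLOCK = 2 * 1024 * 1024
BLOCK_TIME_S = 12

GROTH16_PROOF_BYTES = 130  # roughly, per the talk
PLONK_PROOF_BYTES = 512    # about half a kilobyte

def proofs_per_second(proof_bytes: int) -> int:
    return BLOB_BYTES_PER_BLOCK // proof_bytes // BLOCK_TIME_S

print(proofs_per_second(GROTH16_PROOF_BYTES))  # 1344, the ~1,300 from the talk
print(proofs_per_second(PLONK_PROOF_BYTES))    # 341, the ~330 from the talk
```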
Ethereum mainnet is doing two and a half million gas per second right now; they're doing 30 million gas every 12 seconds, which comes down to this: we get to verify 10 proofs per second on Ethereum mainnet. On Arbitrum, an L2, for example, they're doing 7 million gas per second, so that number bumps up to 28 proofs per second. So we see an obvious bottleneck here: it's the verification cost on chain. And these numbers are extreme upper bounds. This assumes we're filling entire blobs with two megabytes of ZK proofs, and filling entire blocks with verification of ZK proofs, and not even factoring in call data. So how can we scale UniRep, and how can we scale ZK proofs in general, given those bottlenecks? This is a proof from the UniRep system where we're generating a user state transition, and as you can see, there are, I think, seven public signals. For every user that wants to join a new epoch, we make one of these, put it on the blockchain, and verify it. So how can we make this a little more efficient? We can do recursive proofs. The user makes a proof and it gets sent to an aggregator, and then the aggregator proves that four proofs are valid and outputs one proof. So we're able to reduce the cost by, in this case, a factor of four. There are other things we can do to reduce the public signals as well, like queuing the ZK proofs on chain and then forming a hash chain of the public signals and the proofs, so that we don't have to output all these public signals at once, which also reduces the verification cost. Recursive proving is really important because it changes the approach from scaling the throughput of a decentralized network to instead scaling off-chain computational power, which we're much more able to do; Intel and AMD do this every year, and we can also build ASICs to make proofs very quickly. And then we're able to see the sort of improvements that we want to see.
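The hash-chain idea for compressing public signals can be sketched like this; the `fold` function and the signal encoding are illustrative, and SHA-256 stands in for whatever on-chain hash would actually be used:

```python
import hashlib

# Sketch of the hash-chain approach: instead of posting every proof's
# public signals, each proof's signals are folded into a running hash
# on chain, and an aggregator later proves recursively that every folded
# proof was valid. All names and encodings here are illustrative.

def fold(chain: bytes, public_signals: list[int]) -> bytes:
    data = chain + b"".join(s.to_bytes(32, "big") for s in public_signals)
    return hashlib.sha256(data).digest()

chain = b"\x00" * 32  # initial chain state stored on chain
for signals in ([1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]):
    chain = fold(chain, signals)

# The aggregator's single recursive proof commits to this final value,
# so the chain stores one 32-byte commitment and runs one verification
# for all four proofs: a 4x reduction in on-chain work.
print(chain.hex())
```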
If we're able to bundle 10 proofs at a time, we get a 10x improvement; 100 proofs, 100x; and so on. This does introduce a little bit of complexity, because we have to deal with a proof aggregator, and now the user potentially has to wait on it, or we have to optimistically evaluate the aggregator. But this is a decent approach to scaling. Right, so that's most of the presentation. Now I'm just going to talk about some attester ideas that I think are kind of cool, that we could build on the UniRep protocol. The first is ZK DAOs. We can keep the balance controlled by a user hidden within the DAO, and we can also do things like vote on proposals anonymously. There are a lot of interesting things that can be done with this. The next is anti-sybil reputation. I used this as an example in one of the previous slides, but what you could do is take proofs that you have a Web2 identity and give reputation for that, or potentially use something like BrightID, or prove that you have POAPs, for example, and get reputation, and then use that to sign up for other attesters. The third is really simple and generic: a recommendation system as a web app. Everyone has things they use that they would want to recommend to other people, and having reputation for good recommendations seems like a good use of the system. And the final one is one that I thought of when I was given the Devcon POAP. I really hate claiming POAPs on chain, because I'm just giving people a history of the places I've been in the real world, which I think is kind of weird. So we could use UniRep to claim POAPs anonymously, and then make a proof that you have a POAP in a set. For example, I could prove that I have two POAPs from the set of all Devcon POAPs, something like that. And then there are some nice-to-have things for the ecosystem, infrastructure-wise.
The first is a ZK directory. This is just a directory of the hashes of proofs and human-readable descriptions of what the proofs do. This would be really important if we're going to OAuth between different applications and request ZK proofs, because applications are potentially requesting arbitrary proofs that are specific to their application, and so they should have a place where they can look up information about a proof and then show it to the user. The next thing is PLONK. We already have this, but I just want to talk about it a little more. The most important part of PLONK, to me at least, is that we don't have the circuit-specific phase 2 trusted setup. With UniRep, we want people to extend the proofs that we've written, and write their own, to build their own functionality. With Groth16 that's very difficult to do, because they have to run a trusted setup ceremony for the circuits they build, and that's a huge amount of coordination and effort that you can't really expect most developers to take on. With PLONK we cut all that out: we get to just use a phase 1 trusted setup made by some trusted entity or entities, and then you have secure proofs. And the last thing is easier browser proofs. We can make proofs in the browser using snarkjs, but you have to configure webpack very specifically, and it also uses a zkey file, a WebAssembly file, and a second WebAssembly file for the curve. It would be much easier if we had a tool that just bundles all of that into a single WebAssembly module that we can run in the browser: pass in the signals and get the proof back. This would give us free asynchronous operation as well. It would just be a nice thing to have. Yeah, that's pretty much the end of the talk. We have a few events happening related to UniRep. First, we have a UniRep workshop on Friday at 10:30 a.m. That's going to be on the first floor at the ZK Community Hub.
We also have a demo on Thursday at 3 o'clock in the same place. Big thank you to 0xPARC for putting on those events and inviting us to participate. And if you want more information about UniRep, you can go to github.com/Unirep, or scan this QR code, and it'll take you to our organization homepage, where you can find documentation, links to our Discord, and a link to a demo application running on this protocol. Yeah, that's it. Any questions? First of all, cool presentation, thank you for the information. I was wondering, from a product standpoint, or maybe from the user perspective: how do you explain the need to sequentially create new identities in order to remain anonymous? And is there a way to abstract that, maybe save it in the session in the browser, to abstract the user away from such an involved mechanism of creating a pseudonymous name every epoch? Sure. Yeah, so the reason we do this is because users need to be able to prove a leaf in the epoch tree to claim reputation. But I would say it's not as involved as it might seem. We can do this silently in the background in the browser. The user enters the web page, and then the web page checks if they need to generate a new identity; if they do, they generate a ZK proof in the background and submit it to a relayer, and it's done. There's not a way to make it less manual without changing the architecture of the system pretty substantially, though. Hello. So, how often do you see the state transition happening? Is it going to be per epoch, or can there be some trickery to avoid this heavy computational load? Yes, the state transition happens any time you want to move to a new epoch. So if you are participating consistently, then yeah, it's every epoch. Attesters can set the epoch length themselves, though.
So this could be pretty short, maybe one hour, or it could be a week, or a month, or anything like that. We're also planning to make it so attesters can set the max nonce value, to change the number of keys the user has per epoch. That would make the per-epoch proof larger, but you also get longer epochs, so there's a lot of tuning we can do there. Any thoughts about using DIDs and VCs standards in this implementation? Any thoughts about using what? Decentralized identifiers and verifiable credentials, from the W3C. No, I'll have to look into that; I'm not actually aware of those. But thank you. Thank you for that. Quick question. You talked about Groth16 verification after EIP-4844. I would be curious to understand if you have more detailed thoughts, in particular two questions. I was thinking, if you don't have your proof in call data, and instead it's in a blob, you would have to verify some batch proof, right? Opening your KZG commitment inside the snark and then doing recursive proofs, or, I guess, what is the exact setup you're thinking of? Do you mean Groth16 or do you mean PLONK? Either one, I guess. Right. So, sorry, that might be a little bit hard to hear, but it's a question about how we aggregate these proofs, or how we sequence them for aggregation, in a post-EIP-4844 world. Right. Yeah. So I touched on it a little bit, but I didn't talk about it much. One approach is that users send the public signals of the proof, and then the hash of the proof, onto the blockchain, and we form a hash chain on chain; then the aggregator is able to make a recursive proof using that data and only put a hash chain on chain. But this still involves the aggregator receiving the full proof and then calculating the hash. Honestly, we're not really close to doing that yet.
Recursive proving, and especially Solidity verifiers for recursive proofs, is still a little bit far out. But yeah, it's probably going to be some sort of optimistic system, so the user experience is pretty good. Thank you. Thank you all.