Good, good, good. Oh yeah, so, right, finally it's settled. So I guess I prepared this talk for a slightly more basic audience, so I'll pass quickly through the introductory parts and focus on the more important stuff later. For this audience, I think a lot of the things are clear, like the motivations. This is our motivation: we receive requests from a lot of these blockchain gaming clients, loot boxes, lotteries, NFT crafting. They have their own use cases, and there is a very nice blog by Chainlink where they describe all sorts of use cases. Overall, the idea is that all these on-chain games try to generate randomness in a smart contract. So how do smart contracts get good randomness? This is again a very cliche slide: unbiased, which has already been spoken about, unpredictable, publicly verifiable, pretty standard stuff.

And if you try to use, again this is a very basic slide, something like a PRF, the obvious problem is that you have the secret key exposed on-chain. It's slightly surprising that it still satisfies the other two properties, but not unpredictability, because the secret key is exposed. Storing secrets on-chain seems to be a huge problem. So we go for a more sophisticated solution where we generate randomness off-chain, and here comes the whole story of DVRF. That's basically the introduction of why DVRF comes into play in smart-contract-based randomness generation.

Okay, so verifiable random functions. We need unbiasability, where the output cannot be biased; unpredictability, again a very cliche slide, hard to predict without knowledge of the secret key; and then there is public verifiability, which is slightly more tricky: we have to generate some sort of zero-knowledge proof which verifies an input-output pair X and Y. So why do we go for a distributed solution? Because we offer a distributed VRF, or threshold VRF, service. I think Bernardo also mentioned this in his last slide: it's because of single point of failure. If you have only one node storing the secret key, it becomes a single point of failure. It has other problems too; it's also not really good from the ethos of blockchain technology, which has to be decentralized. That's what the whole technology is about, right? So, distributed VRF: the secret key is secret-shared in a threshold fashion, t-out-of-n, usually using a distributed key generation; I'll talk more about that. Any t+1 nodes can collaborate to compute the output, and it should tolerate up to t malicious corruptions. Pretty much standard stuff again.

So DKG is not the focus of this talk, but just to mention that at Supra we actually tried to look into various DKG protocols; I think some of them have already been mentioned in this workshop. However, just a couple of days back, or maybe yesterday, we got this new thing: a non-interactive VSS using class groups, and based on that we have a new DKG protocol. It's completely non-interactive and publicly verifiable. Maybe let me spend some more time on this.
So there is this whole literature of publicly verifiable secret sharing: you basically publish encryptions and an associated proof. If you try to do it efficiently with ElGamal, the main problem with the prior approaches, the ones not based on class groups, was that you cannot encrypt large data. So, I was talking about the class-group DKG: it's similar to standard publicly verifiable secret sharing, where you provide a zero-knowledge proof of correct secret sharing. If you use standard ElGamal, the problem is that if you encrypt in the exponent, then unless the message is very, very small, you cannot efficiently decrypt. That's been a standard problem for ElGamal, and class groups have this amazing structure that enables this: there is a subgroup structure inside which enables efficient decryption, and everything becomes extremely simple. This is a preliminary paper, it doesn't have very rigorous proofs yet, we're working on those things, but I definitely encourage you to take a look if you're interested in the DKG literature.

Okay, so now what's the setting? There is a requester holding an input X, and after the DKG every node has a share of the secret key. They output partial evaluations, which are aggregated publicly. In this picture n equals three and t equals one; t equals one means you need two parties, t+1 is the threshold, and it should tolerate up to one corruption, because we're in the honest-majority setting. I'll explain why honest majority is important.

So the VRF requirements are there, and additionally, maybe these are the new things that haven't been spoken about in this workshop, so let me spend more time here. It requires some new guarantees. Consistency means the participating set doesn't matter: similar to threshold signatures, whichever set participates, any t+1 nodes produce the same output. Strong pseudorandomness is actually a very important property: now that corruptions are involved, pseudorandomness should hold even with up to t malicious corruptions, and that becomes an important property to prove. And then robustness is another important property: if aggregation succeeds, verification must also succeed. Why this is important boils down to the fact that you should be able to verify each of these partial values immediately. Say your threshold is 10 and you receive 20 values; you do not know which of them are actually correct. You cannot try all 20-choose-10 combinations; you rather verify each of them. Maybe up to 10 of them are corrupt, so you discard all the bad values, choose only the good ones, and once you aggregate correctly and publish, the verification should work. It's not desirable if verification fails after aggregation. So robustness is important. And then liveness means execution must always produce an output, kind of a guaranteed-output-delivery situation, and for this to hold we need honest majority: the threshold cannot cross, actually cannot even touch, n/2 for this to work.
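Circling back to the exponent-ElGamal point for a second, here is a toy sketch (with a deliberately insecure modulus, purely my own illustration) of why plain ElGamal caps the plaintext size: decryption only recovers g^m, so extracting m itself is a discrete-log search, fine for a tiny m but hopeless for a full-size secret share.

```python
import random

p = 2**61 - 1   # toy Mersenne prime modulus -- NOT a secure choice
g = 3

sk = random.randrange(1, p - 1)
pk = pow(g, sk, p)

def encrypt(m: int):
    """Exponent ElGamal: the message m lives in the exponent of g."""
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (pow(pk, r, p) * pow(g, m, p)) % p

def decrypt(c1: int, c2: int) -> int:
    """Recovers g^m; getting m back is a brute-force discrete log."""
    gm = (c2 * pow(c1, p - 1 - sk, p)) % p   # strip the mask: c2 * c1^(-sk)
    acc, m = 1, 0
    while acc != gm:                         # only feasible because m is tiny
        acc, m = (acc * g) % p, m + 1
    return m

c1, c2 = encrypt(42)
assert decrypt(c1, c2) == 42   # fine for 42; a 256-bit share would never terminate
```

The class-group trick, as mentioned above, is that the group contains a subgroup where this discrete log is easy, so a full-size share can be recovered directly and the publicly verifiable encryptions become efficiently decryptable at realistic sizes.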
Now, coming back: guaranteed output delivery is impossible without robustness, simply because the aggregation can pick wrong values, and then after reconstruction, the aggregation, you see an invalid signature, but it was not detected. You need to detect everything immediately. So it boils down to the fact that you need honest majority plus this robustness to achieve the liveness guarantee. These are the new, important properties.

Okay, so for our constructions: we have a base construction and more advanced constructions. Our base construction is basically the GLOW construction, which is also essentially the drand construction; they're very similar, with some differences. For the DKG we are taking the class-group DKG; it's not immediately in place, but we're working towards deploying it. Then this part is basically a BLS partial signing protocol: hashing and raising to the power of the partial key. For partial verification, for robustness, you also generate a zero-knowledge proof of correctness, which is basically a Chaum-Pedersen proof of equality of discrete logarithms. In aggregation, you verify each zero-knowledge proof, and only if they pass, and because there is an honest-majority guarantee, and assuming everybody replies and there is no network fault at this point, you will be able to derive a correct value Z through this fancy Lagrange interpolation in the exponent, and the hash of it is the output. Verification is simple: a BLS verification plus the hashing. That's basically the overall construction.

A couple of points here. The zero-knowledge proof option is actually already in the GLOW paper: there are two options for this partial verification, either you attach the zero-knowledge proof, or you don't and use BLS partial verification, a pairing check, instead. If I remember correctly, I'm not sure about drand, but the DFINITY randomness beacon was actually not using the zero-knowledge proof. So what happens if you do not have the zero-knowledge proof? You lose on two fronts. In other words, if you use the zero-knowledge proof, it's a win-win situation. You win on strong pseudorandomness: you can prove things are strongly pseudorandom. Intuitively, why? Because zero-knowledge proofs are simulatable, and the pairing verification does not support simulatability, which is quite crucial here. Plus, since you're not working in the pairing group for this partial verification, you gain roughly 2.5x in computation. That was already discovered in GLOW, where they showed why the pairing-based option fails for this and so on. And, as I already mentioned, these proofs ensure robustness; a toy end-to-end sketch of this flow follows.
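To make the base flow concrete, here is a schoolbook sketch over a toy prime-order group: partial evaluations H(x)^sk_i, a Chaum-Pedersen proof of exponent equality for partial verification, and Lagrange aggregation in the exponent. This is my own illustration under toy parameters, not Supra's code; a real deployment works over a pairing curve such as BLS12-381 (so the final check is a BLS pairing verification), and the shares come from a DKG rather than the dealer stand-in here.

```python
import hashlib, random

# Toy safe-prime group: p = 2q + 1, with g generating the order-q subgroup.
p, q, g = 1019, 509, 4   # toy parameters -- NOT secure

def H1(x: bytes) -> int:                 # hash-to-group stand-in for hash-to-curve
    return pow(g, int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % q, p)

def Hc(*vals) -> int:                    # Fiat-Shamir challenge hash
    data = b"".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Trusted-dealer stand-in for the DKG: Shamir-share sk with t = 1, n = 3.
t, n = 1, 3
coeffs = [random.randrange(q) for _ in range(t + 1)]           # coeffs[0] is sk
share = {i: sum(c * i**j for j, c in enumerate(coeffs)) % q for i in range(1, n + 1)}
vk = {i: pow(g, s, p) for i, s in share.items()}               # per-node public keys

def partial_eval(i: int, x: bytes):
    """sigma_i = H1(x)^sk_i plus a Chaum-Pedersen proof that
    log_g vk[i] == log_{H1(x)} sigma_i."""
    h, s = H1(x), share[i]
    sigma = pow(h, s, p)
    k = random.randrange(1, q)
    c = Hc(g, h, vk[i], sigma, pow(g, k, p), pow(h, k, p))
    return sigma, (c, (k + c * s) % q)

def partial_verify(i: int, x: bytes, sigma: int, proof) -> bool:
    h, (c, z) = H1(x), proof
    a1 = pow(g, z, p) * pow(vk[i], q - c, p) % p   # = g^k iff the share is honest
    a2 = pow(h, z, p) * pow(sigma, q - c, p) % p   # = H1(x)^k likewise
    return c == Hc(g, h, vk[i], sigma, a1, a2)

def aggregate(x: bytes, sigmas: dict) -> bytes:
    """Lagrange interpolation in the exponent over t+1 verified shares."""
    ids, z = list(sigmas), 1
    for i in ids:
        lam = 1
        for j in ids:
            if j != i:
                lam = lam * j % q * pow((j - i) % q, q - 2, q) % q
        z = z * pow(sigmas[i], lam, p) % p          # z = H1(x)^sk
    return hashlib.sha256(str(z).encode()).digest() # final VRF output

x, good = b"request-input", {}
for i in (1, 3):                                    # any t+1 = 2 nodes suffice
    sig, pi = partial_eval(i, x)
    if partial_verify(i, x, sig, pi):               # robustness: filter bad shares first
        good[i] = sig
print(aggregate(x, good).hex())
```

The robustness point from above is the check in the final loop: every partial evaluation is verified before anything is aggregated, so aggregation never produces an output that later fails verification.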
So now let's see how the framework actually works. The framework we thought about is: there is a requester, say some GameFi organization, which makes a randomness request in the smart contract. The smart contract generates the input; more on this input part later, it's kind of tricky how to generate it and it's a work in progress. Then there is a relay on the Supra side. We have these relay nodes, which constantly watch for smart-contract requests, and they transfer these inputs to the VRF committee, sending to all the nodes in the committee and getting the partial evaluations back. Aggregation can happen in the relay or in the VRF committee, it doesn't really matter; for simplicity we just draw it happening in the relay. Then the output is sent back, and verification happens in the smart contract. The important part is that the randomness request itself is in the smart contract, and when the output Y comes back, it must be verified at the smart contract. From our discussions with the gaming clients, they are really very stringent on the fact that verification must happen on-chain.

Okay, maybe this is a good point to pause. So far we have mostly been hearing about randomness beacon services, and here is a DVRF service. I'm not completely clear about all of these questions myself, so if there's a question we can discuss afterwards. In our minds, the difference between a randomness beacon service and a VRF service is that with a randomness beacon, the user does not make a request. It's not an on-demand service. There is an input being generated, maybe in a smart contract, maybe some other way, but the user does not make a request; there is a constant stream of randomness being produced. Apart from that, everything stays the same. That can have some downsides in the gaming context, because sometimes you need several randomnesses as output and want to consume them on demand. So there is basically no user control. And furthermore, synchronization can be problematic: say 50 people are playing a game and we all have to agree that whatever randomness is output at, say, 12 AM is what we'll play with; then there may be some faults, and synchronization can be slightly problematic. Again, I'm not an expert in GameFi and these kinds of things, so if you have a comment, you are welcome to join the discussion.

Okay, so for the first, basic version of what we have now, let's talk about how you produce the input. We came up with a bunch of stuff that should be in the input after a lot of discussions with our engineers, and this is a completely open area. I was discussing with Aniket and we said this is a good audience to put it in front of, so that there can be some discussion, and maybe there should be some standardized format for the input. So right now we have roughly what you might expect. There is a user input field, for more user control. There is a unique nonce each time; the nonce can even be a counter. There is a chain ID, so whichever chain the smart contract is on, Polygon or Ethereum, is taken into account. There is a user ID specific to a user, and since a user can have multiple functions, a callback function name or ID. And finally there is the block hash, about which we are slightly skeptical whether it should be there or not; a sketch of how these fields might get packed together follows below.
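Just to make that field list concrete, here is one hypothetical way to pack those fields into a single VRF input; the encoding, the ordering, and even the final field set are exactly the open standardization question I mentioned, so every name here is a placeholder rather than Supra's actual format.

```python
import hashlib

def make_vrf_input(user_input: bytes, nonce: int, chain_id: int,
                   user_id: bytes, callback_id: bytes, block_hash: bytes) -> bytes:
    """Bind all request fields into one fixed-size VRF input."""
    fields = [user_input, nonce.to_bytes(8, "big"), chain_id.to_bytes(8, "big"),
              user_id, callback_id, block_hash]
    # Length-prefix each field so different field splits can't collide
    # after concatenation (e.g. "ab"|"c" vs "a"|"bc").
    blob = b"".join(len(f).to_bytes(4, "big") + f for f in fields)
    return hashlib.sha256(blob).digest()

# Illustrative values only; 137 happens to be Polygon's chain ID.
x = make_vrf_input(b"player choice", nonce=7, chain_id=137,
                   user_id=b"alice", callback_id=b"fulfill", block_hash=b"\x00" * 32)
```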
Traditionally, before all these VRF services, people used the block hash as the main source of randomness, and I guess a lot of people know this, but still, let me reiterate that using the block hash can be really problematic. I think there are actually some real-world attacks, maybe you know better than me, but if you use the block hash, the miner can know the block hash ahead of everyone else, so the unpredictability aspect is really weak there. So in this context, whether we should include the block hash or not, I personally think it might be dispensed with, but I'm not sure actually.

Okay, now let me talk about an issue with this basic service that we found, which was our motivation to add more features to it. In the basic service, the output is exposed immediately: I make a request and immediately the output becomes public and comes on-chain. One issue with that is the request cannot be made in advance. Say I want to use the randomness an hour from now, maybe because there will be some other computational overhead at that time; I cannot make the request in advance, I have to do it right at that time, and wait through network delays and some synchronization issues as well. And then there are reusability issues. By reusability I mean a very simple thing: you use the output Y as a seed to generate further randomnesses. From my discussions with the engineers, the current Chainlink VRF service actually offers that, but you have to use all of those values at the same time, because once Y is known there is no unpredictability left; the entire derived sequence is predictable. So it cannot really be reused later.

With this problem in mind, what we proposed is a slightly upgraded service with output privacy. What output privacy means is: the user makes a request, and with very minor changes in the overall architecture, the smart contract generates the input after the user request and sends the input back to the user. Then the user sends a blinded input X'. Note that both X and X' are visible to everyone, including the smart contract; they're public. The purpose of blinding is not to hide the input but to hide the output. After blinding, the entire operation from the previous slide stays exactly the same, but now with respect to the blinded input: the entire VRF is computed on X', a blinded output Y' is generated, and verification on-chain can be done with respect to X' and Y'. Y' is completely blinded: nobody has any clue what the underlying Y is except the user, who holds the blinding randomness. After Y' is sent back to the user, there is one more step: the user unblinds it, and after that it can be sent to the chain and verified and so on; the round trip looks like the following sketch.
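Here is a minimal sketch of that blind/unblind round trip, with the threshold evaluation collapsed into a single exponentiation and the same toy group as the earlier sketch; everything here is an illustrative assumption, not Supra's implementation. The last line also previews the reuse idea discussed next: many per-round values derived from the one hidden seed.

```python
import hashlib, random

p, q, g = 1019, 509, 4   # toy safe-prime group of order q -- NOT secure

def H1(x: bytes) -> int:                     # hash-to-group stand-in
    return pow(g, int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % q, p)

def blind(x: bytes):
    r = random.randrange(1, q)               # blinding factor: the user must keep this state
    return pow(H1(x), r, p), r               # X' = H1(X)^r, a uniformly random group element

def unblind(y_blind: int, r: int) -> bytes:
    y = pow(y_blind, pow(r, q - 2, q), p)    # Y'^(1/r) = H1(X)^sk = Y, the hidden value
    return hashlib.sha256(str(y).encode()).digest()

sk = random.randrange(1, q)                  # stand-in for the committee's shared key
x_blind, r = blind(b"request-input")
y_blind = pow(x_blind, sk, p)                # what the committee returns: Y' = (X')^sk
seed = unblind(y_blind, r)                   # only the holder of r recovers Y
assert seed == hashlib.sha256(str(pow(H1(b"request-input"), sk, p)).encode()).digest()

# Reusability: derive many per-round values Z_i = H(Y || i) from the hidden seed.
Z = [hashlib.sha256(seed + i.to_bytes(8, "big")).digest() for i in range(1, 11)]
```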
Now the request can be made in advance, of course, because everything stays blinded the whole time, hidden, and it can be reused. But there is a catch: the verification must be delayed. Why is this true? Here is how we want to reuse it as a seed: you hash Y together with counters one, two, and so on, to generate Z1, Z2, and so on. Now, to verify Z1 with respect to a request X, you have to reveal Y, and once you know Y you also know Z2, Z3 and everything else. So if I have Z1 and Z2 together, once I reveal the proof for Z1, there is no unpredictability left for Z2. There is still an advantage gained: we can make just one request and then generate multiple values, but the verification cannot be done until this round of requests is exhausted. So I can use them over several rounds of, say, NFT crafting, or this reward-based gaming where you allocate a reward at each round, and after, let's say, ten minutes I reveal: here are the ten different randomnesses I used, now you verify. For those ten rounds you have to believe that I'm doing it correctly. That can be mitigated by using some staking mechanism and so on; that's again a toss-up right now, but that's the overall idea from the architectural perspective.

There is another catch: the requester knows the output ahead of time. In applications where the requester can benefit from knowing the randomness ahead of time, that can be problematic. For example, if there is some betting event going on, where different players are betting and the house allocates a payout for, say, the correct bet, then it becomes a bit problematic: I can plant my dummy player, make the request ahead of time, and tell this guy, okay, I got number 10, so you just say 10, and I can always make my favorite player win. But in gaming contexts where you allocate loot boxes, or award some NFT in each game, this might not be possible, or wherever there is no collusion between the house and any of the players. So there are multiple scenarios; you have to be application-specific. We plan to have documentation of all these things, a sort of user guide for this kind of service, but that's still in progress.

Okay, good, so let me briefly talk about how to achieve this output privacy. From the construction perspective it's not that hard; there are a few more added algorithms. There is a blinding algorithm, and one crucial thing here is that the blinding should come with a zero-knowledge proof as well, because if I send you something malformed, the security can be messed up a little bit. From a provable-security perspective: why do you use H(X) raised to the blinding factor, what's the use of the random oracle here? Because I want to make sure that H(X) maps to a random group element for which nobody knows the discrete log. These kinds of things become important in the proof. For the provable security guarantee we need to make sure this is a correctly formed blinded pair X and X', and for that we can just attach a Schnorr proof here, sketched below.
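A sketch of what that Schnorr proof could look like, over the same toy group: the user proves knowledge of the blinding factor r such that X' = H(X)^r, so the nodes can check the blinded input is well-formed before evaluating on it. The encoding and names are my own illustration, not the paper's exact scheme.

```python
import hashlib, random

p, q, g = 1019, 509, 4   # toy safe-prime group of order q -- NOT secure

def H1(x: bytes) -> int:
    return pow(g, int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % q, p)

def Hc(*vals) -> int:    # Fiat-Shamir challenge
    data = b"".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_blinding(x: bytes, x_blind: int, r: int):
    """Schnorr proof of knowledge of r with x_blind = H1(x)^r."""
    h = H1(x)
    k = random.randrange(1, q)
    c = Hc(h, x_blind, pow(h, k, p))
    return c, (k + c * r) % q

def verify_blinding(x: bytes, x_blind: int, proof) -> bool:
    h, (c, z) = H1(x), proof
    a = pow(h, z, p) * pow(x_blind, q - c, p) % p   # equals h^k iff x_blind = h^r
    return c == Hc(h, x_blind, a)

r = random.randrange(1, q)
xb = pow(H1(b"request-input"), r, p)
assert verify_blinding(b"request-input", xb, prove_blinding(b"request-input", xb, r))
```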
That proof of correct blinding is checked at the nodes, and the aggregation stays exactly the same. Apart from the blinding and unblinding parts, I think this is the more interesting part actually. I also wanted to say that the user needs to keep state: the blinding factor is needed for the unblinding as well. If you're familiar with oblivious PRF constructions, this is very similar to an oblivious PRF, but note that we are not trying to hide any input here. Oblivious PRFs are all about hiding the input; here we are blinding, but the goal is always to hide the output. The construction is very simple: choose a random blinding factor, hash the input, and raise it to the power of that blinder. It becomes a completely random group element, and everything works because of the homomorphism in the exponent, some linear homomorphism going on, just like with BLS signatures. So it's very similar to threshold BLS or an oblivious PRF, nothing very fancy, but the setting is slightly different, as I told you, and that's why we need to be careful: in the proof we need to propose a new kind of modeling. So the construction is very similar, but there are subtle differences, and for the reduction we need to rely on one more type of assumption, which is different from BLS, where a standard bilinear-CDH kind of assumption already works. It's slightly different here.

Yeah, so basically we want to merge these two services, so that there will be P or NP, not the usual P versus NP question, but P stands for private and NP stands for non-private. The user can, on demand, make different requests, either private or non-private, and based on that we have the dashed line: the smart contract should be designed so that it sends the input back to the user for a private request, or just waits for the relay to capture it for a non-private request. The rest of the service stays basically the same; there is some engineering discussion that should go on there. There is actually an attack with this kind of framework, a type of attack that can happen if the relay is corrupt, or anyone who is watching the smart contract, because note that the input is already public. How do you know whether a request is private or non-private? Let's say there's a private request, but the relay, or someone who gets the relay to act, just makes a non-private request on the same input; then the output is immediately public. The way we thought to prevent this output-recovery attack in the merged service is to change the input format to include a flag: you set this flag to one or zero, and make sure the entire input spaces of the private and non-private requests are completely disjoint. That makes things work out correctly; a tiny sketch of this domain separation follows.
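A tiny sketch of that domain separation, with a hypothetical flag byte folded into the hashed input: the private and non-private input spaces become disjoint, so a corrupt relay cannot replay a private request's input as a non-private one to expose the output.

```python
import hashlib

def vrf_input(payload: bytes, private: bool) -> bytes:
    """Prefix a flag byte so private and non-private inputs never collide."""
    flag = b"\x01" if private else b"\x00"
    return hashlib.sha256(flag + payload).digest()

# The same payload yields two unrelated VRF inputs depending on the mode.
assert vrf_input(b"request", private=True) != vrf_input(b"request", private=False)
```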
So we're doing all this, putting in this privacy aspect and whatnot, but the question is: who cares? Well, we have ongoing discussions with a particular gaming service; our more applied people are talking to them, I was mostly just listening in on the meetings. DeFi Kingdoms is basically an on-chain gaming platform, and they were not using a VRF because of the high gas cost: every time they fetch a value, there is a gas cost associated with it. And they were not happy with what Chainlink was offering, where you can hash the output into several values but have to use them all together. What we offer with this output privacy is that you can make it more efficient: you get one seed and keep it hidden for, say, the next ten shots. They are pretty excited about it. So we have a client for this, which is kind of important in a startup setting.

Okay, so just summarizing. The basic design uses GLOW, and we have a new DKG protocol; I ask all of you to take a look, it's on ePrint, though it's not super formal right now, we'll make it more formal. We have this output privacy part; we call it FlexiRand, I forgot to mention the cool name we invented. FlexiRand is actually conditionally accepted at CCS 2023 right now. And then the big open question that I want to put out there, so that the community looks into it more, is very pertinent to the VRF setting; it doesn't come up in the randomness beacon setting. If I understand randomness beacon services correctly, the input really doesn't matter there, because it's controlled by the environment. But here the input is generated in a smart contract, it's user-influenced, so the input formatting becomes really important: if we change the input formatting, maybe there are attacks from the framework side, even if not from the VRF computation side. With that I conclude my talk. This is the QR code; if you scan it, you will see the webpage. Thank you; I'm happy to take questions.

Question from the audience: what is the governance of the nodes that run the VRF? How do you manage governance and membership?

Okay, that's a good point. Right now, we're actually launching the mainnet alpha soon, and in what's being deployed right now, all the nodes are internal, geo-distributed over three continents; we have nine nodes right now, as far as I know. This is a very experimental setup. The plan is that at some point we'll probably go to something like 100 nodes. But the governance part, you're right, is still being figured out. Right now it's manual in some sense, not really automated; the scaling and all these things need to be done. I'd be curious to know how this is done in drand or other services.

Question from the audience: thanks for the talk. You did a nice job of explaining several circumstances in which this might fail. I was curious whether those are theoretical, or whether you actually had any hacks or breakages in the past.

No, the full-scale deployment is actually still underway, so we do not really have experience with users yet.
Maybe one year from now, if you ask me that question, I can answer better. These are all speculation, theoretical thinking about what might happen or might occur, and some of them actually came up from discussions with these potential clients.

Wonderful. Thank you very much, Pratyay. Thank you.