Our next presentation by Cathie Yun: Implementing a Zero-Knowledge Proof, or How to Write Bulletproofs in Rust. Give it up.

Hello everyone, thank you for being out here. To change gears a little bit, we're going to do some math together, and it's gonna be fun, hopefully. For some context, I'm Cathie. I worked with a really awesome team at Chain and Interstellar, Henry and Oleg, and together we implemented Bulletproofs in Rust and built a bunch of protocols on top of that. Today I just wanna share some of that journey, so that hopefully you'll learn something, find it interesting, pick up some tips, and have fun.

Cool, so the outline is that first I'll tell you a little bit about the story: how we came to decide on implementing Bulletproofs and what tools we used. Then I'll look into the code and show you a little code snippet from our range proof code. I'll tell you about what we were able to do because we implemented it in Rust, and I'll talk a little bit about what we built above and beyond using that protocol.

So first, let's talk about motivation. Why do we care about zero-knowledge proofs? Zero-knowledge proofs have actually been around for a really long time, but they've been getting a lot of interest recently because of their application to blockchain privacy. So what does this mean and how is this useful? In a blockchain transaction, we have some inputs and some outputs, and for a financial transaction at least, you wanna make sure that the sum of the inputs balances out with the sum of the outputs. Trivially, when nothing is encrypted, you can do that just by adding and checking. So that's cool, but what if you want to encrypt or hide what your inputs and outputs are? Then it's a little bit more complicated, and one way you can do this is by using an additively homomorphic commitment. In the Bulletproofs paper, and in a lot of confidential transaction formats, people use Pedersen commitments. Basically, what this allows you to do is check whether the commitments to the underlying values add up. And if they do, if A plus B equals C plus D, then you know that the underlying inputs add up to equal the underlying outputs, with very high probability.

So that's nice, but as noted, on its own this is actually a broken scheme. The reason is that you can commit to negative numbers, and therefore you have an integer underflow, which means you can take your $10, or Bitcoin or whatever, and go spend it even though you should only be able to spend nine, because you only had nine in the input. That's bad, and we don't wanna let that happen, but we still want to use this really awesome additively homomorphic commitment scheme. So what we can do in this case is add a zero-knowledge proof. I guess I should explain here what a zero-knowledge proof is exactly: a zero-knowledge proof is a proof that a statement is true about some input, without having to reveal what that input is. Specifically in this case, an example of a zero-knowledge proof we can use is a range proof. A range proof is a proof that a certain input number is in range, which you can verify without knowing what that input number is. So here you can make a proof that three is in range without revealing that it's actually three; you only reveal a commitment, which is C. Does that make sense? Cool.
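To make that balance check concrete, here's the standard Pedersen commitment arithmetic behind it (this notation is mine, not from the talk slides):

```latex
% Pedersen commitment to a value v with blinding factor r,
% using fixed group generators G and H:
\mathrm{Com}(v, r) = vG + rH
% Additive homomorphism: a sum of commitments is a commitment
% to the sum of the values (and the sum of the blindings):
\mathrm{Com}(v_1, r_1) + \mathrm{Com}(v_2, r_2)
  = \mathrm{Com}(v_1 + v_2,\; r_1 + r_2)
% So checking A + B = C + D on commitments convinces the verifier
% that the hidden values balance, without revealing any of them.
```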
Cool, so you use this zero-knowledge proof, and that way you can prove that the value is in range, and if, for instance, you're making a commitment to a negative number, this range proof will actually fail. So this makes our confidential transaction not broken anymore. That's really cool, and a lot of blockchain protocols use this general scheme, and it seems to be working so far.

And why do we care about Bulletproofs in particular? Bulletproofs is just one of many zero-knowledge proof protocols, but the reason we were looking at it is that it has really nice properties that we wanted in the context of a blockchain. Blockchains require a pretty constrained proof size, because all of the full nodes have to receive and verify the proofs, so you have to transmit all of these to all of the nodes, and Bulletproofs does provide that: the proofs are O(log n) in size, and usually less than one kilobyte. It also has really fast verification, which is important because all the full nodes have to do verification, plus the ability to aggregate multiple proofs together into one proof that verifies more quickly, as well as the ability to batch verifications of multiple proofs so that they basically run faster in parallel. And lastly, a really nice-to-have property is that you can have ad hoc logic: you can code up a range proof, but you can also code up proofs of multiple other statements. And you can do this without a trusted setup. For those of you who are familiar with zk-SNARKs, for instance, those need a trusted reference string, which is why you have to have the secret setup ceremony, and that's sort of annoying when you're setting up new logic. Bulletproofs doesn't require that, because it doesn't require pairings.

So that's why we at Chain picked Bulletproofs. We were building a confidential blockchain protocol at the time, and so we said, hey, let's implement this paper. It had just come out at the time, in 2017, and the next step was to actually understand how the paper worked, and that sort of felt like this: the paper says "we would like to make a proof of these statements," and then it says, "okay, here's your massive verification equation that is 30 lines long, just take it and run with it." As someone who's implementing the paper, it's nice to actually understand how the math works so that you can double-check that it's right, and we tried really hard to do that. We spent quite a while basically just writing down theories on the whiteboard of how you go from step one to step two, going "no, that's wrong," and then trying again, and trying again, until finally we arrived at something that was fairly satisfactory. We sent it to Benedikt, one of the authors of Bulletproofs, and he was like, cool, that looks good. Then we wrote it up in online notes so that no one else has to go through that: they can just read our notes and follow along step by step as to how you get from the initial assumptions all the way to the final verification check. Hopefully it's very easy to follow along with, and it walks you through all of the different manipulations of the initial assumptions that you have to do to arrive at the final math. So I guess you might be curious what that actual math is. I'll give you a little bit of an overview.
I don't want to put everyone to sleep, so I'm not gonna go too deep into the math, but hopefully just enough to give you a little intuition for how it works. The central building block of Bulletproofs is a really efficient inner product argument. As a refresher, an inner product is what you get when you have two vectors and you take the sum of the entry-wise multiplication. So "the inner product of A and B equals C" is the statement here, and an inner product argument is basically a way to argue to someone that, in fact, the inner product of A and B is C. Super naively, if I were just trying to prove that to you, I could send you the vectors A and B, then send you the scalar C, and you could do the math yourself and check: either yes, it does equal C, or no, the inner product of the two does not equal C. That's pretty easy, but it would take O(n) space for me to send this proof to you, where n is the length of the vectors. That's pretty inefficient if you have very large vectors A and B.

What Bulletproofs introduces is a really efficient way to make this argument, one that takes O(log n) space instead of O(n) space. The intuition for how it works is that you start off with the vectors A and B, and you wanna prove that C is the inner product of A and B, but you do multiple rounds, and at every round you actually halve the size of the vectors A and B. So you make A prime and B prime, and the way you do this is you take a random scalar x, multiply the first half of A by x and the second half of A by the inverse of x, and then add those two together. By adding those, you've now created an A prime vector that is half the length of your original A. You do something similar to B to make B prime, and then the claim becomes one about the inner product of A prime and B prime. At this step, and I'm not gonna go into the details, you basically make two commitments and send those commitments to the verifier. Then you repeat this step over and over again until you end at the base case, where you just have two vectors of length one. As you might have guessed, because you're halving the vectors at every step, and at every step you make a constant number of commitments, this takes O(log n) space instead of O(n) space. That's the core trick behind how Bulletproofs is able to be so space-efficient.

So you might be wondering: why do we even care about proving that C is equal to the inner product of A and B? The reason is that we can represent basically any statement we want as an inner product argument, by applying math and cryptography to it. As an example, I'll walk you through a little bit of how we can do this for a range statement. Here we start out with the statement that V is between zero and 2^n. We wanna apply some math and cryptography to get it into the form of an inner product argument, such that if and only if this inner product argument is true, if and only if C is equal to the inner product of A and B, then with very high probability we know that V is in the range from zero to 2^n. Does that make sense? That's our goal here: to make this jump between the left and the right. And the way we do that is by observing something really interesting, which is that if V is between zero and 2^n, you must be able to represent V as a binary number of length n.
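Written out explicitly (my transcription, following the paper's notation), the halving step from a moment ago and this binary-decomposition idea look like this:

```latex
% One round of the inner-product argument: given a challenge x,
% fold the length-n vectors a and b in half:
a' = x \cdot a_{\mathrm{lo}} + x^{-1} \cdot a_{\mathrm{hi}}, \qquad
b' = x^{-1} \cdot b_{\mathrm{lo}} + x \cdot b_{\mathrm{hi}}
% After log2(n) rounds the vectors have length one, and each round
% sends only a constant number of commitments, hence O(log n) size.

% The range statement as an inner product: v lies in [0, 2^n)
% exactly when it has an n-bit binary representation a_L:
v \in [0, 2^n) \iff v = \langle \mathbf{a}_L, \mathbf{2}^n \rangle,
\quad \mathbf{a}_L \in \{0,1\}^n, \quad
\mathbf{2}^n = (2^{n-1}, \ldots, 2^1, 2^0)
```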
So this is just by definition of binary numbers. For example, if V is seven, we know that seven is between zero and 2^n where n is four, and therefore you must be able to represent V as a binary number of length four: zero, one, one, one. Cool. And if V is outside that, greater than 2^n, then we should not be able to represent it as a binary number of length four. Cool. So let's call this binary representation of V "A sub L". We can then just say that V, the secret value, is the inner product of A sub L and the vector of powers of two, that is, two cubed, two squared, two to the one, two to the zero.

Cool, so now it seems like we have achieved our goal of representing our initial range statement as an inner product. But not yet, because you can do something malicious. Right here we're assuming that A sub L consists only of zeros and ones, but what if A sub L actually contains, say, a 100 in one of its digits? If that were the case, then A sub L would not be a binary representation, and V could be way outside the range from zero to 2^n while the inner product statement, V equals the inner product of A sub L and the powers-of-two vector, is still true. So basically, we want to somehow guarantee that A sub L is comprised only of zeros and ones, in order to make sure that V is actually in range. And the way we do this is we add a second statement: we say that A sub L times (A sub L minus one) should be equal to zero, at every index of A sub L. If you think about it, this statement will only be true if A sub L is comprised of zeros and ones; if it had, for example, a 100 at an index, then you'd have 100 times 99, which is not zero. So this checks that the bits are actually bits.

We continue to add these checks and different statements to our collection of statements, and then we add some blinding factors, which are basically a way to make sure that the secrets, such as V and A sub L, remain secret. And then we combine them: we do a lot of math rearranging that's really tedious, and we arrive at an inner product argument, C equals the inner product of A and B, such that if it's true, then V is in range with very high probability. I'm actually going to go deeper into the math in another talk at Monero Village on Sunday, so if you want to learn more about the math, please come to that, or you can just read our notes on the Bulletproofs math; in that talk I'm basically going to walk through those notes, so you can also follow along at your own pace. And yeah, interestingly, Monero does use Bulletproofs, though they don't use our implementation of Bulletproofs, although we're working on that.

Cool, so now that we've mostly understood the paper, we have to actually go and implement it, and there are a lot of places where cryptography papers will say "you need a prime order group," or "just apply the Fiat-Shamir heuristic," where it's not always very straightforward how, as an implementer, you actually do that. So I'll talk a little bit about the tools we used to implement these things. Starting with the prime order group: this picture is a little screenshot from the paper, where the paper just says "let G denote a cyclic group of prime order p," and as a reader you're like, okay, fine, we use a prime order group, but in practice how do you implement this? Most people use elliptic curves, because they're just really efficient.
Edwards curves are really nice because they're really fast, they have complete formulas, which means they don't have weird edge cases that you have to deal with one by one, and they're easy to implement in constant time. The problem, though, is that Edwards curves aren't prime order. Weierstrass curves such as secp256k1, which is what Bitcoin uses, are prime order, but they don't have the nice performance and complete-formula properties. So it'd be really awesome if we could somehow get the best of both worlds.

In the past, what people have done is say, "well, we'll just use an Edwards curve in place of a prime order group and it will be fine," and it was not fine. So that's not a good option; don't do that, because then you'll have cofactor problems, as many people have learned before. However, we do have two really awesome protocols, called Decaf and Ristretto, which actually do let us get the best of both worlds. Decaf was created by Mike Hamburg in 2015 and basically lets you do a cofactor-4 reduction of an Edwards curve. That's really awesome: if you have an Edwards curve with cofactor four, which means the order is four times a really large prime p, then you can actually create a prime order group of order p from it. Ristretto was created by Mike Hamburg and Henry de Valence, and it does a cofactor-8 reduction. The awesome thing about Ristretto is that Curve25519, which is an extremely fast Edwards curve, has cofactor eight, so because of the existence of Ristretto, we can now create a prime order group over Curve25519. You can read more about Decaf and Ristretto at these links here.

What's really awesome about Curve25519 is that it has extremely fast parallel formulas, from this paper (HWCD), and it's implemented in a library, curve25519-dalek, which uses these parallel formulas and uses AVX2 to get really good performance. So basically, because we have Ristretto, we can take advantage of all of these performance speedups and make a prime order group over Curve25519. The end result is that we do in fact have extremely good performance compared to other prime order groups: if you compare it to secp256k1, for instance, libsecp256k1 is four times slower than ristretto255. That's really awesome, and I think if anyone's looking for a prime order group, Ristretto is sort of the obvious way to go. And it's implemented entirely in Rust, so that's an extra bonus. So that covers our requirement for a prime order group.

Next, we have to deal with the Fiat-Shamir heuristic. If you've read through some interactive cryptography papers, you've often seen this snippet on the right, where the prover gives something to the verifier, the verifier generates a random scalar, and then the verifier gives that random scalar back. And the paper says, "well, here we have an interactive protocol, and you make it non-interactive by simply applying the Fiat-Shamir heuristic." As an implementer you're like, cool, how do I actually do that? In practice, a lot of people implement the Fiat-Shamir heuristic by simply hashing the things that the prover sends to the verifier. That may work for a protocol that's this simple, but if you have multiple rounds or multiple parties participating, it can get really complicated, with edge cases that you have to deal with.
What happens, for instance, if you forget to feed data into the hash, or the data is ambiguously encoded? What if you have multi-round protocols and you have to somehow hold onto the randomness from a previous round? What about domain separators? There are just a lot of little edge-case scenarios that are hard to deal with, and you might get one a little bit wrong, which breaks your whole protocol. So it would be really nice if we didn't have to worry about this; if somehow you just had a transcript object that held onto all the randomness for you and gave you some randomness back whenever you asked for it. Like this: if you implemented the thing we saw on the first slide, where the prover gives the verifier L and R, then in practice, when you're making it non-interactive, you just give the transcript L and R, and where in the interactive protocol you'd get a challenge scalar back from the verifier, in code you just get a challenge scalar from the transcript. That would be really nice. And luckily for us, Henry de Valence also implemented Merlin, which is a STROBE-based transcript construction for zero-knowledge proofs. This gives us everything we want: message framing, domain separation, composition of proofs. It basically makes our life super easy when it comes to dealing with the Fiat-Shamir heuristic; we treat it as a black box and don't have to worry about what happens behind the scenes. And the best part about it is that more information is available at merlin.cool, so you can learn more about it there.

Awesome. So now we've covered the story, how we decided to implement a zero-knowledge proof, why we chose Bulletproofs, and what tools we used, so you're probably itching to see the actual code. Let's dive into that. If you feel like following along, the GitHub repo is here; it's open source, and you can look into it: dalek-cryptography/bulletproofs. Cool.

So let's look at the range proof code. On the prover side, you have to actually assemble this proof, and I gave you a little peek into the first few steps of making the proof, which is to make this A sub L vector, the bit vector that represents V. Here on the left is an excerpt from the paper, and this might look familiar: A sub L is a binary vector such that the inner product of A sub L and the powers-of-two vector equals V, which should look familiar from before. And then you just create A sub R, which is A sub L minus one. On the right-hand side is some pseudocode that matches pretty closely with our actual code, except that our actual code has a few more optimizations that make it a little different and a little harder to read, so I made it a little nicer here, and you can follow along with the actual code at these links.

On the left, what we do is sample a random scalar, alpha, and then we make a commitment, big A, using the blinding factor alpha and taking the multiscalar multiplication of our G vector, our base point generators, with A sub L, and of our H vector with A sub R. Effectively, what big A does is commit, along with a blinding factor, to A sub L and A sub R. On the right-hand side, we can do this really simply using the Ristretto library and its curve and scalar operations: h times alpha, plus the MSM (multiscalar multiplication) of G_vec and A sub L, plus the MSM of H_vec and A sub R. So hopefully this maps from the paper really well.
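As a rough, self-contained sketch of how that step can look in Rust with curve25519-dalek and Merlin (variable names are simplified, this is not the repo's exact code, and it folds in the Fiat-Shamir step I'll describe next):

```rust
use curve25519_dalek::ristretto::RistrettoPoint;
use curve25519_dalek::scalar::Scalar;
use curve25519_dalek::traits::MultiscalarMul;
use merlin::Transcript;
use rand::rngs::OsRng;

// Sketch of the prover step: commit to the bit vectors a_L and a_R
// with a random blinding factor alpha, append the commitment to the
// Merlin transcript, and derive a challenge scalar back from it.
fn commit_and_challenge(
    transcript: &mut Transcript,
    h: &RistrettoPoint,       // base point for blinding factors
    g_vec: &[RistrettoPoint], // per-bit generators G
    h_vec: &[RistrettoPoint], // per-bit generators H
    a_l: &[Scalar],
    a_r: &[Scalar],
) -> (Scalar, RistrettoPoint, Scalar) {
    // A = alpha*h + <a_L, G> + <a_R, H>, as one multiscalar multiplication.
    let alpha = Scalar::random(&mut OsRng);
    let big_a = RistrettoPoint::multiscalar_mul(
        std::iter::once(&alpha).chain(a_l).chain(a_r),
        std::iter::once(h).chain(g_vec).chain(h_vec),
    );
    // Fiat-Shamir via Merlin: append A under a domain-separating label,
    // then squeeze a challenge scalar out of the transcript state.
    transcript.append_message(b"A", big_a.compress().as_bytes());
    let mut buf = [0u8; 64];
    transcript.challenge_bytes(b"y", &mut buf);
    let y = Scalar::from_bytes_mod_order_wide(&buf);
    (alpha, big_a, y)
}
```

Note the single multiscalar multiplication over all terms at once; that pattern comes back later when I talk about performance.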
And then we do the exact same thing, but with S sub L and S sub R, where S sub L and S sub R are actually blinding factors for A sub L and A sub R. We generate those, we generate a blinding factor rho, and then we create big S, which is the commitment to all of those things. Cool, and now we have the actual exciting part, which is that we get to do the Fiat-Shamir heuristic. Here the paper just says "send A and S to the verifier," and in practice, in the code, because of our transcript object thanks to Merlin, we can just commit the points A and S to the transcript, and then, where in the interactive protocol we're supposed to get y and z from the verifier, we now get the challenge scalars y and z from the transcript. You can see here that we actually have domain separators at every one of these steps, and those domain separators make sure that we're actually getting A and S, and not confusing them with any other steps.

Cool, so that's a little pseudocode, and hopefully all of the rest of the library is about as easy to read. But now I'm going to talk at a higher level about how we were able to take advantage of Rust to get really good performance and really nice code safety. One really cool thing we were able to do was implement session types for an MPC protocol using Rust. For context, I mentioned briefly earlier that you can get really good aggregation properties from Bulletproofs. What that means is that if you have n people who all have separate range proofs for their own private secret values, they can come together and, without having to trust each other, make one aggregated range proof that is significantly smaller and faster to verify than n separate single range proofs. In order to do this, you have to engage in a multi-party computation protocol, and it looks sort of like this; you don't see the actual party states here, but the state transitions look something like this. Basically, every party goes through some calculations on their secret value and makes a commitment, and they send this commitment either to a dealer, which is what we have pictured here, or to all of the other parties, if you don't trust a dealer. The dealer, or those other parties, takes the commitments, generates a random challenge scalar, and returns it to all of the parties, and they actually do this three separate times. You can see how, if we were just using hashing for the Fiat-Shamir heuristic, this would be extremely complicated, because you're hashing data from a bunch of different parties over a bunch of different steps, but with the transcript protocol this is actually really clean.

But going back to my main point: because this is a multi-party computation protocol, and you're doing multiple party state transitions, it's really, really important that you get these transitions in the right order. For example, if you were to call the same function on the same party state twice, that would actually break the protocol, and you might be able to leak secrets from the party by doing that, because you're doing a different challenge over the same secrets. So you have to make sure that you can only transition from one party state to the next, and never repeat a party state. And the way we make sure of this in Rust is by taking advantage of its memory management model.
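Here's the shape of that trick, sketched with made-up state names (the real MPC types in our repo are more involved):

```rust
// Illustrative session-type pattern: each transition takes `self` by
// value, so a consumed state can never be reused. The state names and
// fields are invented for this sketch, not the actual MPC API.
pub struct PartyAwaitingChallenge {
    secret_value: u64,
    blinding: [u8; 32],
}

pub struct PartyAwaitingFinalization {
    secret_value: u64,
    blinding: [u8; 32],
    challenge: [u8; 32],
}

impl PartyAwaitingChallenge {
    // Consumes the old state and returns the next one. Calling this
    // twice on the same value is a compile-time error in Rust:
    // "use of moved value".
    pub fn apply_challenge(self, challenge: [u8; 32]) -> PartyAwaitingFinalization {
        PartyAwaitingFinalization {
            secret_value: self.secret_value,
            blinding: self.blinding,
            challenge,
        }
    }
}
```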
Every party transition is a function on a party state, where the function takes ownership of the party state, consumes it, and generates the next party state. So you're actually physically not able to call the same function on a state twice. I thought that was a really cool thing you can do in Rust: basically guarantee that you can't misuse this multi-party computation protocol. So that's one fun thing.

Another thing we can do in Rust is make optimizations based on the fact that Rust iterators are lazy and zero-cost. I mentioned earlier that you can do multiscalar multiplications of a large generator vector and a large scalar vector, and that basically looks like this: Q equals some scalar times a generator, plus another scalar times a generator, and you can chain these up. If your iterators are lazy and zero-cost, then any time you have a multiscalar multiplication, especially in the verification equation, instead of doing that multiscalar multiplication in place, you can add all of the generators and all of the scalars to an increasingly large chain of iterators, and only at the very, very end do you do one massive multiscalar multiplication using all of the points (the generators) and all of the scalars from the entire calculation. This is actually way more efficient, both because you don't have to do extra allocations or manage temporaries for those iterators, and because the Ristretto backend we use has speedups such that, if you have a lot of generators and a lot of scalars, it's able to take advantage of some performance properties.

The end result of all of this is that we have extremely good performance. For a 64-bit range proof, we're able to do verification in less than one millisecond, and with IFMA we're actually three times faster than libsecp256k1 and seven times faster than Monero, so basically the state-of-the-art range proof out there, as far as we know. That's pretty awesome, and this performance is a result of, as I mentioned, using Rust, using the iterators, using the Ristretto prime order group, and a lot of other little performance tweaks we did here and there.

All right, so we've implemented this range proof. You're like, cool, now you can do confidential transactions on the blockchain; now we can have private blockchain transactions. That's pretty cool, but we didn't wanna stop there; we were sort of on a roll. So what we did is keep building this protocol out. As I mentioned early in the talk, Bulletproofs allows not only for one specific proof type, the range proof, but for the ability to make a proof over any set of arbitrary statements. And we wanted to make a way for people to make their own statements. The statement could be a range proof; it could be a proof that a loan has been repaid while keeping the loan amount and the percentage private; it could be basically whatever statement you want to make a proof over. So we made this constraint system API for fully programmable proofs on top of the Bulletproofs protocol (I'll show a tiny sketch of what this API can look like in code near the end). What that looks like is that you can add multiplicative constraints: you have variables X, Y, and Z, and you want to prove that X times Y equals Z while keeping all three of those variables secret.
And you can also add linear constraints, which are basically combinations of secret variables with cleartext weights that must equal zero. These are the two things we added to the API to let you build constraint systems, which can represent your arbitrary statements. But you might be asking: why constraint systems? Why do we want to use these to program arbitrary statements? That's because a constraint system can represent any efficiently verifiable program, anything in NP, for that matter. As long as you can express something as a program, you can also express it as a constraint system, and then you can make a constraint system proof that all of the constraints are satisfied by your secret inputs. That's a very complicated way of saying: basically, you can code up whatever proof you want using this API. So we made that API, and we wrote up some more notes on that.

The whole reason we wrote this API was because we wanted to use it, and one of the reasons we wanted to use it was to build this confidential assets protocol. The key difference from a confidential transaction protocol, which is what I talked about earlier, where you just have your input and output amounts as secret values, is that in a confidential assets protocol, not only are your input and output amounts secret, but your asset types are also secret. A confidential assets protocol is actually a lot more complicated to build than a confidential transactions protocol, because you can't just add up the sums across all of the different asset types; that would reveal what the asset types are. So there's a bit more finesse required to build a confidential assets protocol. There are four building blocks for this, and each of those building blocks is actually a combination of constraints that goes into the constraint system. You might be familiar with the furthest-right one, the range proof: that basically just takes all of the constraints we used to build the range proof I talked about earlier. But there's also a shuffle (we call these gadgets), a shuffle gadget, which proves that the inputs and outputs of the gadget are a valid reordering of each other. We also have merge and split gadgets, which either merge two items into one or leave them unaltered, and from a verifier's perspective, you can't tell whether the items were merged or left alone; you only know that one of those two things happened.

We use these building blocks to make a Cloak transaction. What that means is that we have these inputs and these outputs, and we apply all of these gadgets to the inputs, such that if the gadgets were applied correctly, then the whole proof verifies correctly, and then you know the inputs and outputs are a valid reordering of each other. So let's just walk through that, to give you an idea of what that means. Here your inputs are $5, 3 yen, and $4. First you want to group the asset types together: you shuffle them so that the $5 and $4 are grouped together, with the 3 yen at the very end. As a reminder, the verifier doesn't know exactly how you're ordering this; all the verifier can see is whether or not you're doing a valid shuffle.
So this is a valid shuffle. Then we want to merge all of the things that are of the same asset type: $5 and $4 get merged into $9. Then we shuffle again, and we move all of the non-zero values to the top of each asset group, so $9 gets moved to the top, while the leftover $0 gets moved to the bottom of the asset group. Then we split the asset types into whatever output amounts we want: if we look to the right, we actually want output amounts of $6 and $3, so we split the $9 into $6 and $3, and we keep the 3 yen the same. Lastly, we do one more shuffle, just to make sure we have a random ordering for the outputs, and then we add a range check to make sure that none of these values are negative.

Then, as long as you make a proof that all of these operations were valid operations satisfying the constraints in each of these gadgets, the verifier can look at the proof and, without knowing what any of the inputs, outputs, or intermediate values are, say: yes indeed, I am fairly sure that this is a valid confidential assets transaction. You did not, in fact, create extra assets, destroy assets, or anything like that. And to a verifier, transactions of the same size, meaning the same number of inputs and the same number of outputs, are indistinguishable from each other. That's the whole point: we want to keep these transactions actually confidential. The spec and code for this is online, and this is something that has actually been inherited by the Stellar Development Foundation, who are working on integrating it into Stellar. So that's pretty exciting: we now have the ability to do a confidential assets transaction with Bulletproofs, without a trusted setup, which is something that is fairly interesting.

And then, if you're still with me, the last crazy thing is that we wanted to make a smart contract language, so that people who don't understand how constraints work, who don't really care about a multiplicative constraint or a linear constraint but want to write smart contracts, are able to use this language to write contracts that compile down to zero-knowledge proofs. I'm near the end of my talk time and I want to leave time for questions, but basically this was a whole separate talk that we gave at a conference recently, about how we were able to make a zero-knowledge smart contract language using Bulletproofs that's a lot more user-friendly than building constraints by hand.

Cool, so that's my talk. Once again, a lot of this stuff I worked on with Henry and Oleg at Chain and Interstellar; they're both super great, and Oleg is actually carrying on a lot of this work at the Stellar Development Foundation. I also want to say thanks to George and Deirdre, who persuaded me to submit this talk. George actually gave a very similar talk, "Implementing an Elliptic Curve: How to Implement Curve25519 in Go," at this exact village a few years ago, and I modeled a lot of my talk on his. So those are my thanks. For further reading: of course you should read the Bulletproofs paper; it's still a really good paper, even though it doesn't explain the intermediate steps they take to get to their conclusions. You can check out our open source GitHub repo for the implementation, and then you can follow along with our math notes, where we write up how you actually get from step one to step two.
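And since I promised a sketch of the constraint system API earlier, here's roughly what a gadget can look like in that style. The method names below are modeled loosely on the r1cs module in our repo, but treat this as an illustrative sketch rather than the exact interface; check the repo for the real thing.

```rust
// A hedged sketch of a multiplication gadget, in the style of the
// bulletproofs r1cs API (names may differ from the real interface).
// It constrains x * y = z while keeping all three values secret.
use bulletproofs::r1cs::{ConstraintSystem, LinearCombination};

fn mul_gadget<CS: ConstraintSystem>(
    cs: &mut CS,
    x: LinearCombination,
    y: LinearCombination,
    z: LinearCombination,
) {
    // Allocate a multiplier whose left and right inputs are
    // constrained to equal x and y; `out` is the product wire.
    let (_left, _right, out) = cs.multiply(x, y);
    // Add the linear constraint z - out = 0, i.e. x * y = z.
    cs.constrain(z - out);
}
```

These same two operations, multipliers plus linear constraints, are what the range proof, shuffle, merge, and split gadgets are all built out of.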
And then lastly, if you want to refer to any of these slides, my slide deck is at speakerdeck.com/cathieyun, so you can find all of these URLs in the future. I'm on Twitter at @cathieyun if you wanna shoot me any questions, or just find me in person later. That is my talk.

Questions at the mic in the center aisle, if you have questions. We do have a cutoff, so please walk up to the mic, and if there are any mobility issues I'll always walk the mic over.

Hi, thank you. I'm wondering, these validations, are they compile time or run time?

So you're talking about the Bulletproofs verification of proofs?

Yeah, well, the components you were showing, where it's able to verify that a component actually does a valid shuffle or a valid merge, these kinds of things. Because it's really interesting: it's like you've kind of implemented a really cool dependent-types kind of validation, and with Rust being a nice strongly typed language, that's kind of neat, bringing the types into a higher level of being able to assert certain things.

Yeah, so you're asking whether these gadgets are basically built at compile time. Whenever we build one of these gadgets, we use the constraint system API to hand-select, here we want one more multiplier, one more linear constraint, and so that's all done basically at compile time. And yeah, does that answer your question? We basically do this by hand for Cloak, but then in ZkVM you can actually do this with certain opcodes, and those get compiled down to these multiplicative and linear constraints, which then form a proof.

Yeah, I think it's great, because a lot of these blockchain languages seem super untyped and uncheckable, so it's nice to see that we're using these modern developments to do this kind of work.

Cool. Yeah, thanks. And if you want to learn more about our smart contract language philosophy: we built ZkVM, the zero-knowledge virtual machine, based a lot on our design for TxVM, which was our original smart contract language, a transaction virtual machine, though it wasn't in zero knowledge. TxVM had in mind a lot of the things you mentioned: being strongly typed, not being able to shoot yourself in the foot by accidentally dropping a contract, having deterministic output, for instance. So we had a lot of lessons learned from developing TxVM.

Anybody else? All right, thank you very much.

Great. Thank you. Thank you.