In this talk, I will try to explain how it is possible to verify the correct execution of a computation without re-executing it. A computation in this sense is a fixed procedure that runs on a certain input and results in a certain output, and the computation itself is deterministic, so whenever you run it on the same input you always get the same output. The obvious way to verify it is to just run the program again, and that is what is currently done in blockchains: if you sync a blockchain, you rerun all the computation that the miners have already done. You don't rerun the search that is done in mining, but you do rerun the execution of the transactions. On the horizon there are new research results which might allow you to check a computation, and be reasonably sure that it is correct, faster than re-executing it.

As an outline of this talk: I will start with explaining why that might be useful for blockchains (I already started a little on that), and then there are basically two popular approaches to the problem. One of them is called SNARKs and the other one is called STARKs; they share some properties, and in some other regards they are fundamentally different. Since there were already talks about SNARKs, I will focus on STARKs for the rest of the talk, and I'll try to explain how they work. The first component of STARKs is running the computation and generating something called a computation trace, which is then encoded into a polynomial. The second component is something called an IOPP, an interactive oracle proof of proximity, and this is used for low-degree testing. In the end we will see how you can add zero knowledge to the whole thing. I am in no way responsible for any of the results explained here; this is a long line of research that started in the 90s, and the names here are just some of the
contributors. I warn you that I will leave out some details, quite a lot actually, but I hope you can still see the main breakthrough ideas. I have also warned you that it might be a little dry, so if you have any questions in between, please ask, to make it a bit less dry.

OK. When you look at a blockchain, it always starts with a block number zero, which contains the genesis state. This might be completely empty or contain some pre-allocated funds. Then you add a new block, and this block number one contains a list of transactions and sometimes also something called a post-state root. That is: you take the state of the blockchain, essentially the account balances, and create a Merkle tree of it, and in the block header you store the root hash of that Merkle tree. Now, assuming we have this STARK technology and it is practical, this verifying of computations without re-executing them, or faster than re-executing them, then we might be able to attach a proof to that block which can convince a verifier that the transition from the genesis state to the state in block one via the given transactions was done correctly, so that this post-state root is the post-state root you actually get when you run the transactions. We assume that it is possible to verify this proof without actually re-executing the transactions, without actually finding out the effect of the transactions. So if this block comes in and you want to verify it, you just check the proof; you don't re-execute the transactions. This is a little boring, but it gets more exciting when we add block number two. Block number two looks similar: it also has transactions, a post-state root, and a proof. But now the proof does not only prove that the transactions took the state from block number one to the state in block number two; it also proves that the
verification of the proof in block number one was done correctly. So this proof does not prove that the transition from the genesis state to block number one was correct; it only proves that the proof in block number one was verified correctly. And that is a big difference, because as the blockchain grows, the amount of work required stays the same: we only verify the transition from block two to block three and the checking of the proof in block two. We don't verify the checking of the proof in block number one or the transition from block zero to block one. So all we have is a proof that the state transition from the previous block to the current one was correct, and that the previous proof is correct. This means that, if we have the technology, syncing a blockchain just means downloading the state and verifying the proof in the highest block. That's all you need to do; you don't need to rerun all the previous transactions.

To warn you: this does not yet work in practice, and it does not solve the problem of data availability. Also, you only know that the block you have is correct; you don't know whether it is the highest one, or the one with the most proof of work. That is similar to data availability: you might be disconnected from the network, and the rest of the network might see a much longer chain while you are on the wrong block. That is something you can't detect here; you can only see that the block you have, and its whole history, is correct. OK, so that is what might be possible in the future.

Now, how do these computation verifiers work? They almost always start with something called an interactive protocol. In an interactive protocol you always have two parties (you might also have more, but most of the time you have two). One of them is called a prover and the other one is called a
verifier. The prover is usually much more powerful than the verifier: think of the prover as someone who creates a transaction, and the verifier as just a smart contract on the blockchain. The prover has a tremendous amount of memory, a tremendous amount of disk space, and access to the network, while the verifier is extremely limited. They exchange messages: the prover sends a message to the verifier, the verifier reads that message and answers with her own message, the prover reads that and answers again, and so on. The verifier is allowed to use randomness. On correct input (think of a fixed triple of computation, input, and output, which is correct if the computation on that input yields that output), the verifier has to accept after the exchange of messages. On incorrect input, if the computation was wrong, the verifier has to reject with high probability. Since the verifier uses randomness, we have to say "with high probability" here, not with certainty. You might already see that this is not really suitable for blockchains, because you can't really exchange messages with a blockchain. I mean, you can, but it would be a hassle because it takes multiple blocks. Because of that, interactive protocols can often be made non-interactive by a technique named after Fiat and Shamir; we might take a look at that later.

OK, now let's compare SNARKs and STARKs. Just to spell out the acronyms: SNARK is a succinct non-interactive argument of knowledge, and STARK is a scalable transparent argument of knowledge, although STARKs are also non-interactive most of the time. But it doesn't really matter what the acronyms mean. The important thing is the proof size. Proof size means the length of the exchanged messages; usually, when they are non-interactive, the prover just sends one single message to the verifier, the verifier processes this message and says "I accept" or "I reject", and since it's just a single
message, it is also called a proof. For SNARKs, proofs are very short, something like 188 bytes, mostly regardless of the size of the computation. For STARKs they are longer, currently in the range of 400 kilobytes or so. SNARKs require a quite costly setup phase, but this setup only has to be done once for one type of computation, and after that proving is extremely fast: you invest in the setup and then you can reuse it many times. For STARKs there is no setup, so proof generation is a little slower, but you can run it on different functions without any preparation. Transparency is quite important: for SNARKs you need random numbers and you need to keep them secret. In this so-called trusted setup phase, which you do once in the beginning, random numbers are generated, and they have to be kept secret at all cost. They are not needed any more later, so you can also destroy them; that is why you might have seen videos of people drilling holes in their computers and things like that. For STARKs that is not the case: everything that happens in STARKs is public. Of course STARKs can also have zero knowledge, so there are private parts, but you don't need this trusted setup, and you don't need the "toxic waste", the random numbers that you have to destroy. SNARKs also rely on some cryptographic assumptions, for example that elliptic curves are secure, and something like "knowledge of exponent". The elliptic-curve assumption will no longer hold once we have actual scalable quantum computers, because quantum computers can break elliptic-curve cryptography. STARKs are better there, because they only rely on the existence of collision-resistant hash functions, and I think that is something that will remain reasonably true for quite a long time.

OK. Both technologies, and many others, start with encoding computations as statements
about the equality of polynomials. Encoding computations as polynomials is quite useful because you can easily check whether two polynomials are equal or not by evaluating them at some points. The reason is that if two polynomials are not equal, then they differ at almost all points. In the rest of the talk we will always work with polynomials over finite fields, which means you actually have concrete numbers there: two different polynomials can agree on at most as many points as their degree, and they differ at all other points of the field. The problem is that we want to avoid the prover having to send the full polynomial to the verifier, because the polynomial is usually rather large and we want to keep the messages small; these messages are the stuff that then has to be encoded into the blockchain. So the verifier wants to check that two polynomials the prover claims something about are equal, and for that she has to evaluate them, but she doesn't know the polynomials. So she has to ask the prover to evaluate the polynomials for her and then check that the values are equal, but then she has to trust the prover that the evaluation was done correctly, or the prover has to convince her that it was done correctly. SNARKs and STARKs take two different approaches there. SNARKs use partially homomorphic encryption: the prover evaluates the polynomial at an unknown, encrypted point, and this encrypted point is the thing that is generated during the trusted setup. Some randomness that is later destroyed is used to create a random, secret point where the polynomials will be evaluated. Since this is homomorphic encryption, if the values at the encrypted point, that is, the encryptions of the values at the point, are different, then the decrypted versions are also different. Because of that you can throw away the randomness: you don't need the decrypted point any more, you can just work with the encrypted point.
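The random-evaluation idea mentioned above, comparing polynomials by checking them at a random point, can be sketched in a few lines of Python. The field size and the example polynomials here are toy values of my choosing, not parameters from the talk:

```python
import random

P = 2**31 - 1  # a prime modulus, standing in for a field of size ~2^80

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial given as a coefficient list (lowest degree first)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

f = [3, 0, 2, 7]     # f(x) = 3 + 2x^2 + 7x^3
g = [3, 0, 2, 7, 0]  # the same polynomial, written with a padding zero
h = [3, 1, 2, 7]     # differs from f only in the x coefficient

x = random.randrange(P)
assert poly_eval(f, x) == poly_eval(g, x)  # equal polynomials agree everywhere
# f and h are distinct degree-3 polynomials, so they agree on at most 3 of the
# 2^31 - 1 field elements; a single random evaluation exposes the difference
# except with probability at most 3 / P.
print(poly_eval(f, x) == poly_eval(h, x))
```

The same check fails with overwhelming probability for any pair of distinct low-degree polynomials, which is exactly why a single random spot check suffices.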
OK. STARKs don't use this kind of cryptography; they just use hash functions (whether you count hash functions as cryptography is a matter of taste). Here the rough idea is that the prover commits to the full evaluation of the polynomial: she takes the polynomial, evaluates it at all points in the field, or at least at a large number of points, then creates a Merkle tree of these function values and sends just the root hash to the verifier. The verifier can then request the prover to send some values, and these values of course come with Merkle proofs. The only flaw is that, as I said earlier, the verifier has no control over how the prover actually evaluates this function. Because of that we need a second component, where the prover proves to the verifier that this gigantic array of values she committed to is actually the value table of a polynomial, and this is done in something called an IOPP, an interactive oracle proof of proximity.

OK, so let's get into detail. What is a computation? As I said, it starts with an input. Let us simplify computations rather strongly here and assume we have a computer that only has three registers: a1, a2, and a3. The input is present in these registers at the beginning of the computation, at step 0; here it is 1, 4, and 2. Then we have a fixed program that does something with these registers. For example, at step 1 we take the values of a2 and a3, add them, and store the result in a3; at step 2 we take the values in registers 2 and 3, add them, and store that in register 1; at step 3 we take the values in registers 1 and 2, multiply them, and store the result in register 3; and so on, for 8,000 steps. Of course the program is not a list of 8,000 instructions; it is more compressed, with loops and things like that, so the description of the program is much shorter. But at every step of the computation you know which operation to perform on which
registers. So we have an input of 1, 4, 2, and the prover claims that the output is 15, 3, 34. The computation is correct if and only if it is correct at every single step, and the idea now is that verifying a single step is easy compared to running the whole thing. Now we want to encode that as polynomials. What the prover does is she takes this sequence of values for a1 and turns it into a function: the function a1 at point 0 is 1, a1 at point 1 is 1, a1 at point 2 is 10, a1 at point 3 is 40, and so on. These are 8,001 points, so you can create a degree-8,000 polynomial that behaves exactly like that function at these points: the polynomial at 0 is 1, the polynomial at 1 is 1, and so on. Degree 8,000 might sound like a lot, but that is how it works; in the paper they actually go to degree 10 million or so, the number of steps of the computation. We do that for all three registers. Now there is another polynomial called C, which encodes the program, and it can be used to check the correctness of a single step in the following way. This is the first complicated formula, I'm sorry about that. The computation is correct if and only if the input and the output are correct (that is something we have to check in addition, and "output is correct" here does not mean that the output and all the preceding steps are correct, just the output itself), and, in addition, for all x between 0 and 8,000, that is, for all steps, C evaluates to zero. C, the step-correctness checker, gets as inputs x, the step; then a1(x), a2(x), a3(x), the values of the registers at the beginning of the step; and a1(x+1), a2(x+1), a3(x+1), the values of the registers after the step. If you know the program, that is all you need to verify the correctness of a single step, and it has to be zero. We assume that this check can be encoded as a polynomial; since the program is finite, this is always the case.
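The trace-to-polynomial step can be made concrete with a hedged Python sketch. It replays the three example steps described above over a tiny prime field of my choosing (the real construction uses 8,000 steps and a field of size around 2^80), then interpolates one register column:

```python
P = 97  # toy prime field; real constructions use much larger fields

def run(inp):
    """Replay the three example steps from the talk and record the trace."""
    a1, a2, a3 = inp
    trace = [(a1, a2, a3)]
    a3 = (a2 + a3) % P          # step 1: a3 <- a2 + a3
    trace.append((a1, a2, a3))
    a1 = (a2 + a3) % P          # step 2: a1 <- a2 + a3
    trace.append((a1, a2, a3))
    a3 = (a1 * a2) % P          # step 3: a3 <- a1 * a2
    trace.append((a1, a2, a3))
    return trace

def interpolate(points, p=P):
    """Lagrange interpolation over GF(p); coefficients lowest degree first."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)  # basis *= (x - xj)
            for k, b in enumerate(basis):
                new[k] = (new[k] - xj * b) % p
                new[k + 1] = (new[k + 1] + b) % p
            basis = new
            denom = (denom * (xi - xj)) % p
        scale = yi * pow(denom, -1, p) % p
        for k, b in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * b) % p
    return coeffs

def evaluate(coeffs, x, p=P):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

trace = run((1, 4, 2))  # 4 rows, so each column fits a degree-3 polynomial
a1_poly = interpolate([(x, row[0]) for x, row in enumerate(trace)])
# the register-column polynomial reproduces the trace exactly at points 0..3
assert all(evaluate(a1_poly, x) == trace[x][0] for x in range(4))
```

With 8,001 trace rows the same interpolation yields the degree-8,000 polynomial the talk describes; in practice one would use FFT-based interpolation rather than this quadratic Lagrange formula.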
Clear so far? Any questions? Who's already asleep? Good. (There was a graphic here, a table with actual numbers.) Let's continue. This is again a repetition of what we had; at the bottom is the condition for the computation being correct. Now something happens that is a simplification, although you might not think of it as one; let's see. We have these three registers and one polynomial per register, and we combine these three polynomials into a single polynomial by interleaving them. We call this single polynomial A. Perhaps it is better to explain it on the table: we build A such that A at 0 is this value here, this is A at 1, A at 2, A at 3, A at 4, A at 5, A at 6, A at 7, and so on. So it basically encodes the whole table into a single sequence of values. And we need these lowercase-a functions to perform the mapping between the indices. If we look at the condition and replace all the a_i by the single A, we get this: C of x, then A at the first index, A at the second index, A at the third index, and then A at the first, second, and third index at x plus one.

Question: when you execute the program with different register values, you get a different polynomial every time, right? Yes, this is work the prover has to do; it has to be repeated for every single execution. But once you have executed the whole program and interpolated the polynomial, you have the polynomial, so you don't have to run the whole program again? Exactly. This is work the prover has to do, but not the verifier. The idea is that verifying the computation is faster than re-executing it, but generating the proof of course takes longer than just executing it.
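The interleaving of the three register columns into the single sequence A can be illustrated directly. The table rows here are the example trace values from the talk, and the index mapping 3x + i is an illustrative choice for the lowercase-a functions:

```python
# Toy trace table: one row per step, one column per register.
table = [(1, 4, 2), (1, 4, 6), (10, 4, 6), (10, 4, 40)]

flat = [v for row in table for v in row]  # the single combined sequence "A"

def a_index(step, reg):
    """The lowercase-a mapping from (step, register) into the flat sequence."""
    return 3 * step + reg

# A(3x + i) recovers register i at step x, for every cell of the table.
for x, row in enumerate(table):
    for reg in range(3):
        assert flat[a_index(x, reg)] == row[reg]
```

Replacing each a_i(x) in the step condition by A evaluated at this mapped index is exactly the substitution described above.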
Did I answer the question, at least a little, or not yet? OK, I will give an overview of the whole protocol later; perhaps it becomes clearer then. Now, the statement here is a statement about a polynomial on the left-hand side and a number on the right-hand side, and it is a statement at 8,000 discrete input points of the polynomial. What we now want to do, to make it fully algebraic, is to turn this into a statement not about discrete points but about two polynomials being identical. What we do is: we say that the polynomial on the left-hand side is zero at these points, and that means that on the right-hand side, if we want a polynomial there, we have to find a polynomial that is zero at these points, and that is this polynomial Z. The polynomial Z is zero at exactly the points 0, 1, 2, up to 7,999, and nowhere else. If you just put Z on the right-hand side, it will not capture the full left-hand side, so we need to multiply by another polynomial factor. A different explanation: we take the left-hand-side polynomial, and we know that it has zeros at these points. Whenever a polynomial has a root r, you can factor out (x minus r), and this works over any field, in particular over our finite field (over an algebraically closed field like the complex numbers you could even factor completely). If we factor out the factors (x minus 0), (x minus 1), and so on, then the factors we removed form Z, and the rest is D. Yes, the degree of the left-hand side is higher than 8,000: the degree of a1 alone is exactly 8,000, and it is an argument of C, which adds more on top, so it will generally be higher. The important thing is that we keep the same left-hand side and replace the zero by a product of two polynomials, and now we can say that this equality holds not only for all x in the discrete set but for all x in the whole field: we have an actual polynomial identity that holds everywhere.
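The factoring argument can be checked mechanically on a toy example; the tiny field and n = 4 constraint points below stand in for the talk's parameters, and the quotient polynomial D is an arbitrary choice:

```python
p, n = 101, 4  # toy field size and number of constraint points

def pmul(f, g):
    """Multiply two coefficient lists over GF(p)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def peval(f, x):
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % p
    return acc

# Z(x) = (x-0)(x-1)...(x-(n-1)): zero exactly on the constraint points.
Z = [1]
for i in range(n):
    Z = pmul(Z, [(-i) % p, 1])

D = [5, 0, 3]      # an arbitrary quotient polynomial for illustration
lhs = pmul(Z, D)   # any polynomial of the form Z*D vanishes on {0,...,n-1}

assert all(peval(lhs, i) == 0 for i in range(n))
# ...and the identity lhs = Z * D holds at every point of the field, not just
# at the n discrete points, so it can be spot-checked at a random location.
assert peval(lhs, 42) == peval(Z, 42) * peval(D, 42) % p
```

Going in the other direction, dividing a polynomial that vanishes on the set by Z, leaves no remainder and recovers D, which is what the prover will compute.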
Because of that, we can apply what we talked about earlier: checking equality of two polynomials by evaluating them at some points and checking that the values are the same. All polynomials here have small degree, small in comparison to the size of the field: the size of the field is something like 2 to the 80, and the degree of the polynomials is somewhere between 8,000 and 10 million. This will be important later.

OK, we've come this far and I still see some open eyes; that's great, because now I will describe the full interactive protocol that is run between prover and verifier. As a reminder, this is the property we want to show in the end. The shared input, available to both prover and verifier, is: C, which is encoded in the program they want to verify; the lowercase functions a1, a2, a3, which are simple functions like 3x plus 2; and Z, the polynomial that has zeros exactly at the points 0, 1, 2, and so on. Z is a polynomial of rather high degree, especially for the verifier, but it has a very simple structure. The prover now computes the polynomials A and D, and these are gigantic beasts that we do not want to send to the verifier. She computes A by running the computation, looking at the values of the registers, and running polynomial interpolation. D is obtained by evaluating this expression here and dividing it by Z, using polynomial division, or something like the fast Fourier transform; there are quite efficient algorithms for that. Now the prover would like to send the full A and the full D to the verifier, but that is way too long, so we do something that is almost as good: creating a Merkle tree of all these values
and sending the Merkle root to the verifier. Actually, we have to evaluate A and D not only on these 8,000 points but on many more. The reason: if we compare polynomials by evaluating them at some points, and these points can only come from the set of 8,000 points, then they will always match. Why? If you only evaluate these functions at 8,000 points, and they have degree 8,000, then two polynomials of degree 8,000 that are different can still be equal at 8,000 points. Because of that we need far more than 8,000 points to get the right probability: if the verifier is unlucky, she might hit one of the points where the different polynomials happen to be equal, but if there are one million points to choose from and only 8,000 of them are unlucky, then that probability is rather low. We would probably even want to go up to 10 million. That is a very large Merkle tree, but it still seems to work. What the verifier now does: she picks a random x prime from these one million points and requests the values of the polynomials at the corresponding points. There is some weird notation here, because the verifier requests A and D to be evaluated, but if you look at the formula, x prime doesn't go into A directly; it is first transformed via these lowercase-a functions. (That is actually wrong on the slide; this minus one shouldn't be there.) In the next step, the prover provides these values (again, ignore the minus one here), together with the Merkle proofs inside the Merkle tree whose root she sent earlier. The verifier of course checks the Merkle proofs and also checks that the equation adds up when you insert the values that the prover just provided.
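A minimal Merkle commitment with openings might look as follows. SHA-256 and a power-of-two leaf count are my assumptions, and the eight "evaluations" are placeholders for the millions of values described above:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the tree, leaf hashes first, root level last."""
    levels = [[H(str(v).encode()) for v in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, idx):
    """Merkle authentication path (list of siblings) for leaf idx."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])  # sibling at this level
        idx //= 2
    return path

def verify(root, idx, value, path):
    node = H(str(value).encode())
    for sibling in path:
        node = H(node + sibling) if idx % 2 == 0 else H(sibling + node)
        idx //= 2
    return node == root

evals = [pow(x, 3, 97) for x in range(8)]  # pretend these are A's evaluations
levels = build_tree(evals)
root = levels[-1][0]                       # all the prover sends up front
assert verify(root, 5, evals[5], prove(levels, 5))       # honest opening passes
assert not verify(root, 5, evals[5] + 1, prove(levels, 5))  # tampered value fails
```

The verifier's query then consists of an index, the claimed value, and the authentication path; the root binds the prover to the whole evaluation table before any randomness is revealed.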
In this model we assumed, I mean, this is still interactive, so the model is much more powerful: it is interactive and the verifier has randomness. I don't have a slide for this, so I'll just talk about it now. The way you remove both interaction and randomness is as follows. If you take a close look, everything the verifier sends to the prover is actually just a random number; the verifier could simply send this x prime and then check that the prover computed everything correctly. It is also important that this random number is public, so the prover can have access to it, because the prover pre-committed to the Merkle root: once the prover has committed to the root of the Merkle tree and then receives the randomness, she cannot modify the Merkle tree any more. So to remove interaction: if the protocol is non-interactive, the prover just generates a single string, a single message, and this is verified once, on the blockchain for example. This message contains basically everything that would be sent to the verifier, that is, the Merkle roots. And to get the randomness, you just evaluate a hash function on these roots and take the randomness from there. This works for the same reason: if the prover first sends the Merkle roots and then receives the randomness, she cannot modify the Merkle tree any more; and if the randomness depends only on the Merkle roots, then any modification of the Merkle tree will also change the randomness. But in the beginning you said, when you talked about the assumptions, that you only expect collision resistance, and now all of a sudden you expect random-oracle behavior from the hash function, right? Is this not much more to ask from the hash function than what you assumed at the beginning?
Yeah, I know what you mean; it might be that I confused some terms here. Is it enough if the hash function output is uniformly distributed? It might be that you need to assume stronger properties of the hash function, but usually this is what should suffice. There was another question, but let me just finish first: now we have this random number, the prover can just use it, continue, provide the values, and perform the check, and then the proof is complete. We send it to the blockchain, and the blockchain just reruns the steps of the verifier. That is how we remove both interaction and randomness. Now to the other question: it doesn't matter, and actually these are finite fields, so there is not really a concept of inside or outside, or smaller and larger. There is no danger of the polynomials blowing up; it is a finite field, so you don't get overflow or precision problems, if that is what you are asking. No, I'm asking: since you are interpolating a polynomial, is it possible for the interpolating polynomial to go off to infinity at the edges of the interval that you compute? No, because the field is finite, so the values of the polynomials are always elements of the finite field; it is a purely algebraic object without any notion of growth. Another nice thing about using polynomials over finite fields, of course, is that everything stays exactly representable, which is also important here. OK, what is missing? I actually said it on the first slide: we claimed that A is a polynomial of small degree, small meaning around 8,000 here. And why is that important?
Otherwise the prover can cheat: the prover computes this A and creates a Merkle tree of the values, but those values could be anything; just from the Merkle root you don't see whether they are the values of a polynomial. And if they are not the values of a polynomial, we lose the property that two different functions differ at almost all points. OK, so that's the question the IOPP will solve. As I already said, the prover can cheat unless A and D are polynomials of low degree, low degree in this case meaning 8,000, and the interactive oracle proof of proximity will help us. I will give a very simplified presentation of the IOPP; in particular, we will just ignore the proximity part. In our simplified version, we have a Merkle tree of the values of a function, and we want to prove that this function is a polynomial of degree at most d. We actually don't care which polynomial, and that is important: the prover doesn't have to prove that this A is a certain polynomial, she just needs to prove that it is some polynomial. If we do it properly, adding this proximity part, then we actually show that it is either a polynomial of degree at most d or far away from all polynomials of degree d, where far away means it differs at many points. But that is a little more complicated, so we'll do the simpler thing. This will also be an interactive protocol, because inside an interactive protocol we can invoke another interactive protocol as a subroutine. The main idea is divide and conquer. We assume, of course, that f is a polynomial, so it is a sum of coefficients times powers x to the k, and we regroup these powers into odd and even ones in the following way: we have our polynomial f of x, and we state that it is equal to g of x squared (these are the even coefficients) plus x
times h of x squared (these are the odd coefficients). Then g and h are again polynomials, but their degree is now d over 2, because the input is already x squared. It is clear that this is possible: we just move all even coefficients to the left, group them, and interpret them as expressions in x squared instead of x; and we move all odd ones to the right, factor out one x, and also interpret them as coefficients of powers of x squared. What is also important: the domain, which is always a subset of a finite field, also halves, because we only look at squares. Now what we do is: again the prover commits to a Merkle tree of all values of g and h, and the verifier requests a random check that this equality actually holds. The prover has already committed to all values of f; she does the same for g and h, and then we evaluate this equality at a single random point, so we know that it actually holds with high probability. Then we do the same thing recursively for g and h, but now the degree is only d over 2, and this recursion terminates when the degree is one or zero, at which point the verifier can just check it by looking at it. Exactly; and the only randomness the verifier sends doesn't depend on any previous messages, which is also important: it is just a random number in a certain range, obtained in the same way as before.

OK, we're almost through, and the exciting part, the next talk, is not far away: how do we add zero knowledge? This is again simplified, because we ignore the proximity part. What is zero knowledge? You could give a whole separate talk about the concept, but a very simple explanation is that in such an interactive protocol, the prover convinces the verifier that the computation is correct without revealing anything else about the computation.
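Before moving on to zero knowledge, the split-and-fold step of the IOPP described above can be sketched in code. Coefficient lists over a toy field are my simplification; the real protocol works with committed evaluations rather than coefficients:

```python
import random

p = 97  # toy field

def ev(f, x):
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % p
    return acc

def split(f):
    """f(x) = g(x^2) + x*h(x^2): g takes the even coefficients, h the odd."""
    return f[0::2], f[1::2]

f = [random.randrange(p) for _ in range(8)]  # degree at most 7
g, h = split(f)
assert len(g) <= 4 and len(h) <= 4           # both halves have degree < 4

r = random.randrange(p)                       # the verifier's random spot check
assert ev(f, r) == (ev(g, r * r % p) + r * ev(h, r * r % p)) % p
# Recursing on g and h halves the degree each round; after about log2(d)
# rounds the claim is about a constant, which the verifier checks directly.
```

The identity holds for every x by construction, so a dishonest prover who committed to something far from a low-degree polynomial will fail the spot check with high probability at some level of the recursion.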
So in our specific example, the verifier never learns the intermediate values of the computation. Of course, since the verifier here could actually compute these intermediate values herself, this is not really the point yet; the actual concept is a bit more involved, because you can add hidden inputs. Take Sudoku, for example: there is an interactive protocol for Sudoku where I can prove to you that I know that a given Sudoku is solvable without you learning the actual solution. I can't do it now, because I don't know the solution, but there is a protocol that does that: at the end you are convinced that the puzzle is solvable without knowing how to solve it. OK, and how does it work with STARKs? We will only look at the zero-knowledge aspect of the IOPP, not of the full STARK. Again, the task is to show that these two polynomials A and D are arbitrary low-degree polynomials. As I said before, we don't want to convince the verifier that they are specific polynomials; it suffices to show that they are arbitrary polynomials of a certain maximum degree. So if we say that P is the set of all polynomials of degree at most d, then the following holds for any polynomial u in P, so for any polynomial of degree at most d, and any function f: f is in P if and only if f plus u is in P. The reason: if f is a polynomial of degree at most d, then adding a polynomial of degree at most d will not yield a polynomial of higher degree; and if f plus u has degree at most d, then you can subtract u, which yields f, and a polynomial of degree at most d minus a polynomial of degree at most d is again a polynomial of degree at most d. Good, and we now use this to get an IOPP with zero knowledge. Well, we will almost show it; there is some tiny point that doesn't quite work out.
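The lemma can be sanity-checked in code, again with toy parameters of my choosing:

```python
import random

p, d = 97, 5  # toy field size and degree bound

def rand_poly(deg):
    """A uniformly random polynomial of degree at most deg (coefficient list)."""
    return [random.randrange(p) for _ in range(deg + 1)]

f = rand_poly(d)
u = rand_poly(d)                              # the prover's random mask
masked = [(a + b) % p for a, b in zip(f, u)]  # coefficients of f + u

assert len(masked) == d + 1                   # the degree bound is preserved
# Subtracting u recovers f exactly, which is the other direction of the lemma.
assert [(a - b) % p for a, b in zip(masked, u)] == f
# Since u is uniform over P, the sum f + u is itself uniform over P, so
# openings of the masked polynomial reveal nothing about f beyond membership.
```

This is exactly the masking step the prover uses next: run the low-degree test on f plus u instead of on f itself.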
chooses randomness here. We want to show that A is a polynomial of degree at most d, but what the prover does instead is choose a random polynomial U of degree at most d, add it to A, and create the Merkle tree of values for A' = A + U. Then both parties run the IOPP for A'. The point is: since U was chosen uniformly at random from P, this A' is actually just a uniformly random polynomial of degree at most d, and the verifier cannot deduce anything about A from it. So, more or less clear? It works because of the lemma above.

Alright, something I forgot: of course the prover still has to convince the verifier that the equality A' = A + U holds, and this is the small hole in the zero-knowledge argument, because for that we again have to evaluate the polynomials at a random point, so the verifier learns at least a single point of A (actually, the verifier also learns the root hash of the values of A). The reason this doesn't matter in the full STARK construction is that it has the property that if you learn up to t bits of the proof, for a certain small constant t, you learn nothing about the actual witness. Because of that, it is actually fine to evaluate this at one point; it will not reveal anything about what the proof is about. Okay, I don't have any more slides. Are there questions?

You mentioned that STARKs are less talked about than SNARKs and get less attention. Do you think they are more applicable to Ethereum than SNARKs?

Actually, it's the opposite. SNARKs are already possible on Ethereum because we have these precompiles, and also because the proofs are only 188 bytes, which fits easily into a transaction. For STARKs, we have proofs of size, I don't know, 500 kilobytes, which is way too large for a transaction.
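The masking step just described can be sketched as follows, again under toy assumptions (a small assumed prime field, polynomials as coefficient lists, and no actual Merkle commitments or IOPP run): the prover blinds A with a uniformly random U of the same degree bound and runs the low-degree test on A' = A + U. By the lemma, A' is in P exactly when A is, and since U is uniform, A' on its own reveals nothing about A.

```python
import random

P = 257  # toy field modulus, assumed for this sketch
D = 7    # degree bound d

def random_poly(d):
    """Uniformly random polynomial of degree at most d over the field."""
    return [random.randrange(P) for _ in range(d + 1)]

def add_poly(a, b):
    return [(x + y) % P for x, y in zip(a, b)]

def mask(a):
    """Prover side: blind A with a random U. In the real protocol the
    prover would commit to the values of A' = A + U in a Merkle tree
    and then run the IOPP on A'."""
    u = random_poly(D)
    a_prime = add_poly(a, u)
    return a_prime, u

def unmask(a_prime, u):
    """Subtracting U recovers A (the 'only if' direction of the lemma)."""
    return [(x - y) % P for x, y in zip(a_prime, u)]
```

Note that the code only illustrates why A' is safe to publish; the small hole mentioned above remains, since checking A' = A + U at a random point still leaks one evaluation of A.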
So I could imagine a specialized blockchain that only uses STARKs, where 500 kilobytes is not that large, because we have one-megabyte blocks in Bitcoin. But I don't think it's practical for Ethereum at the current point in time.

Are there benefits to STARKs? Yeah, the benefits of STARKs are: no trusted setup, and they are quantum resistant. What else did I say? Yeah, quantum resistant, and transparent, which means no trusted setup. That's about it. And also, they don't make any unproven assumptions. Cryptographic assumptions do not only concern quantum resistance: SNARKs assume this knowledge-of-exponent thing, which might just turn out not to be true.

The question was whether the setup for the zero-knowledge property and the setup used for transactions are the same. Yes, it is the same. So in SNARKs, there is an initial setup that generates a certain amount of data, and that data is then reused to verify every single transaction. Yep.

The next question was about constant-size proofs: STARK proofs are going to be longer because you make many challenges, so do you think it's possible to ever have a constant-size scheme that doesn't need a trusted setup? You're saying STARK proofs are mainly longer because they have interaction, and because of Fiat-Shamir and the Merkle proofs? Yeah. I mean, Eli Ben-Sasson is confident that this can be considerably improved. I don't have that much insight, so... I hope.

Is there an implementation of the whole thing? There are implementations on GitHub, yes. I'm not sure what the actual input format is. For SNARKs, you usually have to create arithmetic circuits and then compile them.
And for STARKs, you start with register machines, so they actually use computation traces. The paper says that it is better to start from register machines than to create arithmetic circuits: when you start with arithmetic circuits, you end up with polynomials of degree two, which is important for SNARKs because they can only handle polynomials of degree two, but STARKs can handle polynomials of higher degree. If you start with computation traces, you get polynomials of degree eight, and because of that it is more efficient than going through circuits. Yep.

Just a little curiosity: I heard that quantum computers are still a bit far away; I think universities can do 50 qubits now. But once you decide for one of the two, say for SNARKs, and we then get a quantum computer, you would have to change to STARKs, because the quantum computer breaks SNARKs, right?

I'm not really up to date, but I doubt that they can handle 50 qubits. So in popular articles they usually just say "quantum computer", but actually there are two vastly different types of quantum computers. One of them is adiabatic quantum computers, and they cannot break the discrete logarithm; they can just solve some optimization problems, probably quite fast, but actually there is not a lot of theory about that. I think those have 50 qubits, and the others might have 10 or something like that. The number of qubits you need is basically the key size: if you have a full quantum computer with as many qubits as your key size, then you can recover the private key just from the public key. Whether you can switch from SNARKs to STARKs: I mean, they are fundamentally different things, but if you build your system flexibly enough, then of course you can switch. And if you start with a system like Ethereum, then you have smart contracts and you can have both at the same time.
That is, if either the EVM is performant enough to implement STARK verification, or STARKs are improved so much that they can be run on the EVM. Okay, then sorry for boring you for an hour, and thanks for your attention.