This presentation is about a paper entitled Transparent SNARKs from DARK Compilers, which is joint work between Benedikt Bünz, Ben Fisch, and myself, Alan Szepieniec. The best way to illustrate this work is to situate it in the landscape of general-purpose zero-knowledge proofs. In this landscape you have proofs with succinct verification. These are what you typically associate with SNARKs, where the verifier runs a lot faster than naively evaluating the circuit or running the program would take. On the other hand, you have proofs with transparent setup, where you don't have to trust any person; you just have to trust in the hardness of mathematical problems. The intersection of these two properties was until recently populated only by a couple of theoretical results. This changed with the advent of the STARK proof system and with proof systems that rely on similar mechanics, which in this case boils down to the FRI low-degree test. This intersection is also where our contribution lives, which we call DARKs: it stands for Diophantine ARguments of Knowledge. Another picture in which our work can be situated is that of the compilation pipeline for SNARKs, where you start with a computation and transform it in various stages until you get a concrete SNARK for that computation at the end. In recent years the community's focus has been on the cryptographic compilation phase, and in particular on instantiating it with a polynomial commitment scheme as the only cryptographic tool involved. Since it's the only cryptographic tool involved, all trust assumptions are restricted to the polynomial commitment scheme, and in particular this automatically gives you a universal SNARK. If you need to trust the polynomial commitment scheme, you only need to trust the setup once, and then you can reuse the public parameters to generate SNARKs for any circuit; that's the definition of universality.
On the other hand, if you don't have to trust the polynomial commitment scheme at all, then you have a transparent polynomial commitment scheme, and this property carries over to the concrete SNARK as well. Let's focus on the bottom end of this compilation pipeline, because this allows us to illustrate where our contributions are situated. For starters, we propose a new polynomial commitment scheme based on groups of unknown order. We propose an information-theoretic formalism for abstract protocols that are amenable to instantiation with a polynomial commitment scheme; we call this abstract protocol a polynomial IOP. And we propose a compiler to transform polynomial IOPs, using our DARK polynomial commitment scheme, into concrete SNARKs. When you read the paper, you may notice that we ask and answer questions related to these points and also provide a couple of extended evaluation protocols, but in the interest of time my presentation will focus only on the colored blocks, starting with the polynomial commitment scheme based on groups of unknown order. In a polynomial commitment scheme, the prover possesses a polynomial consisting of lots of coefficients (think a million coefficients or more), and obviously he can't send all those coefficients to the verifier, because that would make the proof large and the verifier slow if the verifier has to read all of them. So instead of sending all of those coefficients, he sends a short commitment, and it's a special kind of commitment because it allows the prover at a later point in time to prove that the polynomial evaluates to a given value y at a point z. We require this commitment to be binding, so after sending the commitment the prover cannot change which polynomial it commits to, and we require the proof to be extractable, meaning that there is an extractor machine that can engage with the prover to extract this polynomial, by running the prover multiple times on the same randomness and so on.
So what does the evaluation protocol look like? We're going to build this evaluation protocol (and by the way, since the verifier sends only public coins, we can turn this protocol into a non-interactive proof using the Fiat-Shamir heuristic) from a special homomorphic commitment scheme tailored to polynomials that is itself simpler than a polynomial commitment scheme. So the prover possesses a bunch of coefficients, and the verifier possesses a short commitment to this polynomial, in addition to a point z and the claimed evaluation y. In the first step, the prover splits this polynomial into two parts, the left half and the right half, and he commits to both halves independently. In addition to that, he evaluates each half at z, giving y-left and y-right. The verifier needs to be able to check that this concatenation matches. This is a special check performed on the commitments, but it pertains to the concatenation of the polynomials' coefficient lists. If the concatenation check passes, then he produces a random scalar alpha, which he sends to the prover, and the prover uses this random scalar to combine the two halves into a new polynomial. On the verifier's side, this scalar is used to combine the two evaluations into a new evaluation and the two commitments into a new commitment. And if you look carefully, what's claimed about the blue polynomial, its commitment, and its value is mirrored exactly by what's claimed about the brown polynomial, its commitment, and its value at z, except that the number of coefficients has been decreased by a factor of two. This screams recursion, which you can indeed apply, and after a logarithmic number of rounds you end up with a constant polynomial, which you can send over to the verifier in the clear and check the requisite relations there. Of course, this commitment scheme does need to provide the verifier with certain capabilities.
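The split-and-fold recursion just described can be sketched on plaintext coefficients. This is only an illustration of the arithmetic; the real protocol performs the same steps on commitments, and the function names, the prime modulus, and the particular folding rule (left half plus alpha times right half) are my own illustrative choices, not taken from the paper.

```python
import random

P = 2**61 - 1  # an illustrative prime modulus for the coefficient field

def poly_eval(coeffs, z):
    """Horner evaluation of sum(coeffs[i] * z**i) mod P (constant term first)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % P
    return acc

def fold_prove_verify(coeffs, z):
    """Run the split-and-fold recursion in the clear and check that the
    claimed evaluation survives every folding round. len(coeffs) should be
    a power of two (pad with zeros otherwise)."""
    y = poly_eval(coeffs, z)
    while len(coeffs) > 1:
        half = len(coeffs) // 2
        left, right = coeffs[:half], coeffs[half:]
        y_left, y_right = poly_eval(left, z), poly_eval(right, z)
        # concatenation check: f(z) = f_left(z) + z^half * f_right(z)
        assert y == (y_left + pow(z, half, P) * y_right) % P
        alpha = random.randrange(P)  # the verifier's public-coin challenge
        # fold the two halves into one polynomial of half the length
        coeffs = [(l + alpha * r) % P for l, r in zip(left, right)]
        y = (y_left + alpha * y_right) % P
    # base case: a constant polynomial, sent to the verifier in the clear
    assert y == coeffs[0] % P
    return True
```

After log-many rounds the loop bottoms out at a single coefficient, which is exactly the constant polynomial the verifier receives in the clear.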
Firstly, the commitment scheme needs to be able to commit to elements from an infinite set: we're dealing with arbitrary polynomials, but we want to send only short messages to the verifier. Secondly, the verifier needs to be able to compute linear relations on commitments that carry over to the polynomials they commit to. And thirdly, the verifier needs to be able to verify this concatenation of coefficient lists, which boils down to multiplying the red polynomial by a monomial and then adding the two to check against the blue polynomial. So this is in essence verifying a monomial relation; how do we do that? The answer we came up with is based on groups of unknown order. We know two such groups, and it really doesn't matter too much how they work, because you can use the groups abstractly. For the RSA group you need to sample two large prime numbers, multiply them, and then forget those large prime numbers. Unfortunately, you do have to trust whoever produced those prime numbers to forget them, and that's why the RSA group comes with a trusted setup. For the class group, on the other hand, you just need to sample one large prime, and its negation is the discriminant that defines your class group; you don't need any secret coins to sample it. Unfortunately, neither of these groups is post-quantum, because there is a quantum algorithm due to Peter Shor for finding the order of a group. In terms of hardness assumptions, we require the strong RSA assumption and the adaptive root assumption. The strong RSA assumption has been around since forever, whereas the adaptive root assumption is very new, slightly more than a year old. But despite its young age, it's the complete opposite of the strong RSA assumption: the strong RSA assumption asks the adversary to compute any root of a given random group element, whereas the adaptive root assumption asks the adversary to compute a given random root of any group element.
The cool thing about these hardness assumptions is that they are falsifiable. So what are groups of unknown order good for? In a phrase, they're good for Diophantine arguments of knowledge. Diophantine refers to the fact that we consider equations where all coefficients and all variables are integers. For instance, Fermat's last theorem pertains to a Diophantine equation; Pell's equation is Diophantine; but your average trigonometric identity is not Diophantine. Fortunately, there is a way around that. Arguments of knowledge refers to the fact that we're going to represent our computations as a system of polynomials over the integers and then prove that we know the remaining variables such that these polynomials evaluate to zero. Now, if you've been paying attention carefully, you'll notice that there is a slight discrepancy between what Diophantine arguments provide, which is arguments pertaining to integers, and what we need for the commitment scheme, which is arguments pertaining to polynomials. So we need to find some way to represent polynomials as integers that makes sense for linear and monomial arguments. The solution we came up with is best illustrated with an example. Let's consider the polynomial whose coefficients are 2, 3, 4, and 1, living in the field with 5 elements. The first step is to choose an integer q larger than p, preferably much larger than p; let's choose q equal to 10 for our example. In the second step, you lift the polynomial, or rather its coefficients, to the ring of integers. This doesn't change the coefficients of the polynomial, but it does change how you evaluate and multiply the polynomial, because you never reduce modulo 5, or modulo anything, anymore. And in the last step, you evaluate this integer polynomial at q. Applied to our example, that gives you an integer whose base-10 digits are exactly the coefficients 2, 3, 4, and 1. But notice that this is an integer, even though we started from a polynomial over a finite field.
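The encoding step takes only a few lines to demonstrate. The names `encode` and `decode` are illustrative, and I read the coefficient list constant term first, so the coefficients become the base-q digits of the resulting integer, least significant digit first.

```python
def encode(coeffs, q):
    """Lift the coefficients to the integers and evaluate at q:
    the coefficient list (constant term first) becomes base-q digits."""
    return sum(c * q**i for i, c in enumerate(coeffs))

def decode(n, q, length):
    """Recover the coefficient list by reading off base-q digits."""
    return [(n // q**i) % q for i in range(length)]

# the example from the talk: coefficients 2, 3, 4, 1 over F_5, with q = 10
n = encode([2, 3, 4, 1], 10)  # 2 + 3*10 + 4*100 + 1*1000 = 1432
assert decode(n, 10, 4) == [2, 3, 4, 1]
```

Reading the digits of 1432 from least significant to most significant recovers 2, 3, 4, 1, so the polynomial is fully determined by the single integer.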
This encoding has interesting homomorphic properties. For instance, a linear relation applied to encodings of polynomials is the encoding of the same linear relation applied to the polynomials, and there's a similar homomorphic property for multiplication: multiplying encodings gives you the encoding of the product of the polynomials. Unfortunately, these homomorphic properties only hold as long as there is no overflow in any digit base q. So in particular, if q is large enough, then there is no overflow and these homomorphic properties hold. So how do we build that abstract commitment scheme using groups of unknown order? Well, the commitment function is just raising a given group element to the encoding of your polynomial. This function already gives you linear relations for free, because you can raise a commitment to the appropriate power to multiply it by a scalar, and you can combine commitments using the group operation. However, verifying monomial relations is still difficult. In particular, in order to verify this concatenation, the verifier needs to raise the commitment to the right half of the polynomial to a very large power in order to simulate multiplication by the appropriate monomial. The solution we came up with is for the prover to send both the input and the output of this computation: he sends a commitment to the right half of the polynomial and a commitment to that right half shifted. Then the verifier just checks this exponentiation relation, which is not expensive, actually, because this is exactly the problem that Wesolowski solved a year ago with his proof of exponentiation. This completes the picture of the polynomial commitment scheme based on groups of unknown order. Let's move on to the theoretical formalism and the compiler.
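The homomorphic properties and the commitment function can both be checked numerically. Everything below is an illustration under my own assumptions: q is chosen large enough that no digit overflows, and the "RSA" modulus is built from two published primes, so it is deliberately insecure; a real instantiation needs a modulus whose factorization nobody knows, or a class group.

```python
def encode(coeffs, q):
    """Coefficient list (constant term first) evaluated at q."""
    return sum(c * q**i for i, c in enumerate(coeffs))

Q = 10**6          # base chosen large enough that no digit overflows
f = [2, 3, 4, 1]
h = [1, 0, 7, 5]

# linear homomorphism: 11*enc(f) + 13*enc(h) encodes 11*f + 13*h
assert 11 * encode(f, Q) + 13 * encode(h, Q) == \
    encode([11 * a + 13 * b for a, b in zip(f, h)], Q)

# multiplicative homomorphism: enc(f) * enc(h) encodes the product f*h
prod = [0] * (len(f) + len(h) - 1)
for i, a in enumerate(f):
    for j, b in enumerate(h):
        prod[i + j] += a * b
assert encode(f, Q) * encode(h, Q) == encode(prod, Q)

# commitment: a group element raised to the encoding. Toy modulus from
# published primes: NOT secure, purely for illustration.
N = (10**9 + 7) * (10**9 + 9)
g = 2
def commit(coeffs):
    return pow(g, encode(coeffs, Q), N)

# linear relations on commitments come for free in the exponent
assert (pow(commit(f), 11, N) * pow(commit(h), 13, N)) % N == \
    commit([11 * a + 13 * b for a, b in zip(f, h)])
```

The last assertion is exactly the "linear relations for free" property: scaling a commitment is exponentiation, and adding polynomials is the group operation on their commitments.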
The theoretical formalism we came up with we call a polynomial IOP, where IOP stands for interactive oracle proof: a type of information-theoretic abstraction of a proof system in which the prover can send oracles to the verifier. The oracles in our abstraction are exactly the polynomials. So the prover sends polynomials to the verifier, but the verifier doesn't read these as sequences of coefficients; rather, the verifier has oracle access to these polynomials, meaning that he can query them at points z and will get the response f(z) in return. The verifier also gets to challenge the prover with random scalars, and this interaction can be repeated any number of times before the verifier outputs accept or reject. Any abstract protocol that is captured by this formalism we call a polynomial IOP, and in fact it's quite surprising that there are SNARKs that can be captured as such. Now, in order to compile a polynomial IOP down to a cryptographic proof system, you have to get rid of the oracles somehow. So let's exchange the polynomials for commitments to those polynomials. Whenever the verifier makes a query to one of the oracles, the query is forwarded to the compiled prover, who then answers with the appropriate values, which consist of the evaluation at the point z, which is what the verifier asked for, as well as a proof of correct evaluation pi. The compiled verifier verifies this proof and forwards the evaluation to the IOP verifier. And whenever the verifier produces a challenge alpha, it's just sent to the prover without interception. The protocol then proceeds as it would for the polynomial IOP, and at the end the compiled verifier makes sure that all the polynomial evaluation proofs check out; if they do, and the IOP verifier outputs accept, then the compiled verifier outputs accept as well. That's all fine and dandy, but how do we demonstrate that this compilation is sound?
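The message flow of this compiler can be mimicked with a stub commitment scheme standing in for DARK. Everything here is a placeholder of my own invention (the class names, the hash used as a "commitment", the trivial "proof") meant only to show the shape of the interaction, not the actual scheme.

```python
import hashlib

class StubPCS:
    """Placeholder commitment scheme: a hash as 'commitment' and a trivial
    transcript as 'proof'. A real scheme gives short, extractable proofs."""
    def commit(self, coeffs):
        return hashlib.sha256(repr(coeffs).encode()).hexdigest()[:16]
    def open(self, coeffs, z, y):
        return ("eval-proof", self.commit(coeffs), z, y)
    def verify(self, com, z, y, proof):
        return proof == ("eval-proof", com, z, y)

def eval_poly(coeffs, z, p):
    return sum(c * pow(z, i, p) for i, c in enumerate(coeffs)) % p

p = 101
pcs = StubPCS()
f = [2, 3, 4, 1]

# the prover replaces the polynomial oracle with a commitment ...
com = pcs.commit(f)
# ... the IOP verifier's oracle query z is forwarded to the prover,
# who answers with the evaluation y and an evaluation proof ...
z = 7
y = eval_poly(f, z, p)
proof = pcs.open(f, z, y)
# ... and the compiled verifier checks the proof before forwarding y
assert pcs.verify(com, z, y, proof)
```

The compiled verifier accepts only if every such evaluation proof checks out and the underlying IOP verifier accepts.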
In order to make that argument, you have to show that there exists a compiled-protocol extractor with access to the same interface that is available to the verifier and, in addition, with the capacity to reset the prover to an earlier point in time. With this, the extractor should be able to extract the witness of this proof system in polynomial time. That's a bit of a daunting task, but you are allowed to assume the existence of an IOP extractor with access to the verifier's view of the polynomial IOP protocol and with the capacity to reset the IOP prover to an earlier point in time. But even that doesn't complete the picture, because the IOP extractor expects to see polynomials, whereas the compiled extractor only has access to commitments, evaluations, and evaluation proofs. There is no way for the compiled extractor to simulate the IOP extractor unless he knows those polynomials, and that's exactly why we require of the evaluation proofs that there be another extractor that can extract the polynomials from them. That completes the picture and the argument for soundness. So much for the theoretical background; let's consider the last step in our compilation pipeline, where we get concrete SNARK constructions. When we started working on this project, there was this interesting result called Sonic, which is a SNARK with a one-time trusted setup whose public parameters you can then reuse for any circuit; that property is called universality. Sonic already separates, to some degree, the information-theoretic protocol underpinning the actual protocol from the polynomial commitment scheme, and it instantiates the polynomial commitment scheme with the KZG scheme. So obviously, switching out the KZG polynomial commitment scheme for our polynomial commitment scheme gets us from a trusted-setup SNARK to a trustless-setup SNARK, which is really cool.
At the same time that we were working on our improvement to Sonic, other groups were working on their own improvements to Sonic, giving PLONK and Marlin, which also feature the same separation and therefore are amenable to the same switch. So you can also apply the DARK polynomial commitment scheme to the PLONK IOP and to the Marlin IOP and still get a trustless-setup SNARK; or, if you insist on using the RSA group, you get a structured reference string of constant size, as opposed to the linear-size structured reference string associated with the KZG polynomial commitment scheme. In terms of numbers, for around a million gates and a security level of 120 bits, depending on which IOP you use, you get between 10 and 15 kilobytes of proof size. But the more interesting point is how our scheme compares to alternative schemes. Compared to PLONK and Groth16, the proofs are larger, but our proof system can be transparent. Compared to Bulletproofs, the proofs are larger, but our verifier is a lot faster. And compared to the FRI-based protocols STARK and Virgo, well, they're all transparent and the asymptotics are the same, but the concrete size is much better in the case of Supersonic: we're at 10 kilobytes, whereas they are in the hundreds of kilobytes. This brings us to the conclusion of this presentation. In this work we study the generation of transparent SNARKs from class groups, and we provide a theoretical framework in which such a construction fits. As a result of studying this framework, we come to the conclusion that essentially all SNARKs have a DARK analog and can therefore be made transparent, if you are willing to trust the security of class groups. Open questions raised by this work have to do mostly with the exact security of class groups. If there are other ways to generate groups of unknown order, preferably in a trustless way, that would be very interesting to read about.
Furthermore, by separating out the cryptographic tools, we have a polynomial IOP that we can optimize without regard to the cryptographic compilation process. Lastly, there is a theoretical question surrounding security proofs in the random oracle model versus the standard model, which arises from applying the Fiat-Shamir heuristic to multiple-round proofs. In future work we're going to look at improving the protocol even further, and we're looking to have implementation statistics. Thank you.