OK? Great. Thanks for the introduction. So I'm going to tell you about quasi-optimal succinct non-interactive arguments. This is joint work with Dan Boneh, Yuval Ishai, and Amit Sahai. Today, I'm going to start by reminding you what an argument system is. An argument system is effectively a two-party protocol between a prover and a verifier, where the prover's objective is to convince the verifier that some NP statement S is true. For this talk, we're going to consider the NP language of Boolean circuit satisfiability. In a non-interactive argument system, the proof is a single message from the prover to the verifier.

There are two main properties that we care about when we consider non-interactive argument systems. Completeness says that an honest prover should be able to convince an honest verifier that a true statement is true. Soundness, intuitively, says that a computationally bounded prover should only be able to convince an honest verifier of a false statement with small probability. Concretely, in this talk, we're going to focus on the following definition of soundness, which says that a prover modeled by a Boolean circuit of size two to the lambda, where lambda is a concrete security parameter, should only be able to convince an honest verifier of a false statement with probability that is inverse exponential in the security parameter. By giving this concrete definition of soundness, we're able to compare different constructions of argument systems.

Next, we say that a non-interactive argument is succinct if, moreover, the proofs are very short, namely the length of a proof is polynomial in the security parameter and polylogarithmic in the size of the circuit. In addition, the verifier's algorithm can also be implemented by a Boolean circuit whose size scales only polylogarithmically in the circuit size. In particular, the verifier complexity in a succinct non-interactive argument, or SNARG, can be significantly smaller than that of the classical NP verification algorithm.

There are many instantiations of succinct non-interactive arguments. For instance, Micali's celebrated computationally sound (CS) proofs give one such instantiation in the random oracle model. Alternatively, we can consider the common reference string model, where we assume the existence of a trusted setup algorithm that generates a common reference string, here denoted sigma, and a verification state tau. In the common reference string model, the prover's algorithm is now defined with respect to the common reference string, and the verification algorithm with respect to the verification state. We can consider many different variants of this model. In some cases, the verification state can be public, in which case we say the resulting argument system is publicly verifiable. In other cases, it can be secret, in which case we get designated-verifier or secretly verifiable SNARGs. In this talk, we're going to focus primarily on building designated-verifier SNARGs. In addition, sometimes we allow the preprocessing step that generates the common reference string to run in time that is proportional to the size of the circuit we're verifying. In this sense, the preprocessing algorithm is "expensive," since it runs in time proportional to the circuit size rather than polylogarithmic in it. In this case, we say the SNARG is a preprocessing SNARG. This will also be the focus of this talk.
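As a rough symbolic summary of the soundness and succinctness requirements just described (using s for the circuit size, λ for the security parameter, and σ, τ for the common reference string and verification state; this is a paraphrase of the informal statements above, not a formal definition from the talk):

```latex
% Concrete soundness: any prover P* of size 2^lambda convinces the verifier
% of a false statement with at most inverse-exponential probability.
\Pr\big[\mathsf{Verify}(\tau, x, \pi^*) = 1 \;:\; \pi^* \leftarrow P^*(\sigma, x),\ x \notin \mathcal{L}\big]
  \le 2^{-\Omega(\lambda)}.

% Succinctness: for a circuit of size s, proofs and verification are small.
|\pi| = \mathrm{poly}(\lambda, \log s), \qquad
|\mathsf{Verify}| = \mathrm{poly}(\lambda, \log s).
```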
So having defined the notion of a succinct non-interactive argument system, there are several natural complexity questions we might wonder about. For instance, if we want lambda bits of soundness, how short, or how succinct, can the proofs be? Here, a trivial lower bound says that if I want lambda bits of soundness in the publicly verifiable setting, the proofs had better be at least lambda bits long; otherwise the prover can just guess a proof and succeed with noticeable probability. It turns out that this even holds in the designated-verifier setting, assuming that NP does not have succinct proofs. I'll refer you to the paper for the full details of this fairly straightforward argument. Next, another metric that we might be interested in is prover complexity: how much work does the prover have to invest in order to generate a proof of a true statement? Here, a simple lower bound is that the amount of work the prover has to do is proportional to the size of the circuit it is trying to generate a proof for.

So having defined these two complexity metrics by which we can measure the performance and efficiency of a SNARG, we can define the notion of a quasi-optimal SNARG, which is basically a SNARG that simultaneously minimizes both of these quantities, the proof length as well as the prover complexity, up to polylogarithmic factors. Concretely, the following two properties should hold. The proofs should be what we call quasi-optimally succinct: the length of a proof is only quasi-linear in the security parameter, and that is sufficient to achieve lambda bits of soundness. In addition, the amount of work the prover invests to generate a valid proof of a true statement should scale only quasi-linearly with the size of the circuit. We will also allow an additive term that grows polynomially in the security parameter and polylogarithmically with the circuit size, but the main factor we care about is this quasi-linear scaling with the size of the circuit.

Having defined these complexity notions, let's look at some of the existing SNARG candidates. This is by no means an exhaustive list, but it should give you a sense for the asymptotics of the different candidates we have. First, we have the CS proofs due to Micali, one of the first constructions of a succinct non-interactive argument. It turns out that with a suitable instantiation of the hash function, CS proofs can actually give us quasi-optimal prover complexity. And here, to simplify the comparison, I'm going to drop the lower-order terms that depend only polylogarithmically on the circuit size. So with CS proofs, we can get prover complexity that scales only quasi-linearly with the circuit size. However, CS proofs do not provide optimal succinctness: in order to provide lambda bits of soundness, the length of a CS proof grows quadratically in the security parameter. Alternatively, we can consider SNARG systems that have been proposed more recently, based either on concrete assumptions over bilinear maps or on lattice-related assumptions. All of these constructions provide quasi-optimal succinctness; namely, to achieve lambda bits of soundness, the proofs in these candidate systems are only about lambda bits long. However, none of these systems, for reasons that I will soon describe, provides quasi-optimal prover complexity: the amount of work the prover has to invest to generate a proof scales multiplicatively with the security parameter.
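To make the comparison concrete, here is one way to write down quasi-optimality and the asymptotics just mentioned for the existing candidates (Õ hides polylogarithmic factors; these expressions paraphrase the informal statements in the talk):

```latex
% Quasi-optimality (lambda-bit soundness, circuit size s):
|\pi| = \widetilde{O}(\lambda), \qquad
T_{\mathsf{prove}} = \widetilde{O}(s) + \mathrm{poly}(\lambda, \log s).

% Asymptotics of the candidates mentioned above:
\text{CS proofs:}\quad |\pi| = O(\lambda^2),\; T_{\mathsf{prove}} = \widetilde{O}(s); \qquad
\text{bilinear-map / lattice SNARGs:}\quad |\pi| = \widetilde{O}(\lambda),\; T_{\mathsf{prove}} = \widetilde{O}(\lambda \cdot s).
```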
And the main focus of this work is to actually build the first quasi-optimal succinct non-interactive argument system, namely an argument that simultaneously minimizes both the prover complexity and the proof size. So just to summarize, the main contribution of this work is a new framework for building preprocessing SNARGs, and this follows and extends the framework of Bitansky et al., as well as our own framework from Eurocrypt of last year. Our framework, like the previous ones, proceeds in two main steps. First, we identify a suitable information-theoretic primitive. In this case, it will be something called a linear multi-prover interactive proof, or linear MIP; I will define this later on. And in this work, we give the first candidate construction of a quasi-optimal linear MIP. Next, having defined this information-theoretic primitive, we need a cryptographic tool to compile the linear MIP into a preprocessing SNARG. This is the second ingredient of our compiler, and in this work we show how to use linear-only vector encryption to go from linear MIPs to preprocessing SNARGs. So combining these two primitives, a quasi-optimal linear MIP and a linear-only vector encryption scheme, we obtain the first quasi-optimal SNARG from a concrete cryptographic assumption, the existence of a linear-only vector encryption scheme.

So the core building block in our work is this new information-theoretic primitive, and to introduce it, I need to go and define some preliminaries. Let me remind everyone what a linear PCP is. This is a notion that was formalized and introduced in a work by Ishai, Kushilevitz, and Ostrovsky. A linear PCP is very similar to a standard probabilistically checkable proof, or standard PCP, in that the verifier gets oracle access to a proof. In the traditional PCP model, the verifier gets to query bits of the PCP and then needs to decide. In the linear PCP model, the proof oracle can be viewed as a vector: it implements a linear function. The way the verifier interacts with the linear PCP is that it submits query vectors to the linear PCP oracle, and the oracle's response is the inner product between the verifier's query vector and the linear PCP proof vector. The verifier can submit multiple queries to the linear PCP oracle, and at the end it decides whether to accept or reject the proof.

There are many concrete instantiations of linear PCPs from different kinds of primitives, such as from the Walsh-Hadamard code or from the quadratic span programs of Gennaro et al. One of the most useful properties that these linear PCPs satisfy is that the verifier is actually oblivious, namely the queries that the verifier submits to the linear PCP oracle do not depend on the choice of the statement. It turns out that this is very useful for building SNARGs, as I will show next. The way it works is as follows. Because the verifier's queries to the linear PCP do not depend on the statement being verified, what the verifier can do is generate these queries ahead of time, encrypt them, and publish them as part of a common reference string. Here, the verifier is going to use what's called a linear-only encryption scheme to encrypt its queries. Intuitively, a linear-only encryption scheme is an encryption scheme that is additively homomorphic, meaning that you can add ciphertexts and multiply them by scalars.
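Here is a minimal sketch of the linear PCP query interface just described. It is only a toy illustration of the oracle model (the proof vector and the verifier's decision predicate are placeholders, not a real linear PCP for circuit satisfiability; the field modulus is arbitrary):

```python
# Toy linear PCP oracle: the proof is a vector, queries are vectors,
# responses are inner products over a finite field.

P = 2**61 - 1  # placeholder prime modulus

def inner_product(q, pi, p=P):
    """Oracle response: <q, pi> over Z_p."""
    assert len(q) == len(pi)
    return sum(qi * pii for qi, pii in zip(q, pi)) % p

class LinearPCPOracle:
    """Wraps a fixed proof vector pi and answers linear queries."""
    def __init__(self, pi):
        self.pi = pi

    def query(self, q):
        return inner_product(q, self.pi)

# The verifier's queries are oblivious: they can be sampled before the
# statement is known, which is what lets us move them into a CRS.
if __name__ == "__main__":
    import random
    pi = [random.randrange(P) for _ in range(8)]       # placeholder proof
    oracle = LinearPCPOracle(pi)
    queries = [[random.randrange(P) for _ in range(8)] for _ in range(3)]
    answers = [oracle.query(q) for q in queries]
    print(answers)  # the verifier would now run its decision predicate on these
```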
And secondly, the only kinds of operations supported by a linear-only encryption scheme are these linear functions: adding ciphertexts and multiplying them by scalars. So the way we can now leverage this linear-only encryption scheme to build a SNARG from a linear PCP is as follows. The prover comes along; it has a statement and its witness, and it generates from these a linear PCP proof for the statement and witness. Now it simply simulates the operation of the linear PCP oracle: it pretends the verifier actually submitted these queries to the linear PCP oracle, and it computes the inner product between its proof vector and each of the verifier's encrypted queries. It can do this because the underlying encryption scheme supports linear homomorphism. This, in turn, is the SNARG proof. So if it turns out that the number of queries the verifier has to make is constant, something that is satisfied by existing linear PCPs, then what the SNARG proof contains is a constant number of ciphertexts. So if we use an encryption scheme that provides lambda bits of security, then the resulting SNARG proof consists of a constant number of ciphertexts, and therefore its length is linear in the security parameter.

However, now let's consider the prover complexity. What the prover has to do is essentially evaluate all of these inner products between its proof vector and all of these encrypted query vectors. With existing linear PCP constructions, the length of the linear PCP is linear in the size of the circuit we want to check. And if the underlying additively homomorphic encryption scheme, the linear-only encryption scheme that we use, provides lambda bits of security, then each ciphertext is about lambda bits long, and so the amount of work the prover has to do is essentially lambda times the circuit size. Effectively, the prover overhead is multiplicative in the security parameter. Another way to look at this is the following observation: we pay a cost of order lambda for every homomorphic operation we do, and if the number of homomorphic operations we have to perform is proportional to the size of the circuit, then the overall prover complexity is going to be suboptimal; it's going to be larger by this lambda factor.

And so the way we're going to overcome this barrier is to look at a different kind of linear-only encryption scheme. Instead of encrypting a single field element at a time, we're going to encrypt a vector of field elements. If you prefer a more algebraic view, we're going to consider an encryption scheme where the underlying plaintext space is a polynomial ring that splits into a vector of field elements. As a picture, it looks like the following: instead of encrypting one field element, we'll instead encrypt a vector of field elements. And now the homomorphic operations over this plaintext space correspond to addition and scalar multiplication of an entire vector of field elements at a time. The interesting thing is that we can actually instantiate these kinds of linear-only encryption schemes using encryption schemes based on ring learning with errors. And the nice property is the following: we can encrypt an entire vector containing about lambda field elements in a single ciphertext that is also only roughly order lambda bits long.
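To illustrate the packing idea, here is a toy sketch of a "packed ciphertext" with per-slot homomorphic operations. It performs no actual encryption (and so gives no security); the class and function names are mine, and it only illustrates how one homomorphic operation touches all of the slots at once:

```python
# Mock of slot-wise (packed) homomorphic operations: one ciphertext carries a
# vector of field elements, and a single operation updates every slot.

P = 2**61 - 1  # placeholder prime field modulus

class PackedCiphertext:
    """Mock packed ciphertext with one plaintext slot per field element."""
    def __init__(self, slots):
        self.slots = [x % P for x in slots]

    def add(self, other):
        # one homomorphic addition acts on all slots at once
        return PackedCiphertext([a + b for a, b in zip(self.slots, other.slots)])

    def slotwise_mul(self, coeffs):
        # multiply slot j by coeffs[j]; with an RLWE-style plaintext ring that
        # CRT-splits into slots, this corresponds to multiplying the
        # ciphertext by a single fixed ring element
        return PackedCiphertext([c * a for c, a in zip(coeffs, self.slots)])

def packed_inner_products(encrypted_query_rows, proof_rows):
    """Accumulate, slot by slot, the inner products between the packed query
    vectors and the per-slot proof coefficients. encrypted_query_rows[k] packs
    the k-th query coordinate of every slot; proof_rows[k] holds the k-th
    proof coordinate used in each slot."""
    acc = PackedCiphertext([0] * len(encrypted_query_rows[0].slots))
    for ct, coeffs in zip(encrypted_query_rows, proof_rows):
        acc = acc.add(ct.slotwise_mul(coeffs))
    return acc

if __name__ == "__main__":
    lam = 4  # stand-in for lambda slots
    queries = [PackedCiphertext([k + j for j in range(lam)]) for k in range(3)]
    proof = [[2, 3, 5, 7], [1, 1, 1, 1], [0, 4, 0, 4]]
    print(packed_inner_products(queries, proof).slots)
```

The cost intuition is that each call to `add` or `slotwise_mul` is one homomorphic operation but advances roughly lambda field operations' worth of work, which is what shaves the multiplicative lambda overhead down to polylogarithmic.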
Effectively, this means that when we perform these homomorphic operations over these packed ciphertexts, these encrypted vectors, we're only paying polylogarithmic overhead for every elementary field operation. This is what's going to allow us to achieve polylogarithmic overhead for the prover. Graphically, what the picture now looks like is the following. When the verifier encrypts its queries, it can now encrypt an entire vector of queries within each ciphertext. And what the prover can now do is apply a different linear function to each slot of these queries. So somehow we've departed from the traditional linear PCP model, where the prover has to apply a consistent linear function, the same linear function, to all of the queries. Now it can operate on all of these slots independently. This is useful because it reduces the amortized complexity of these homomorphic operations. But at the same time, it's giving the prover more power: it's able to evaluate inconsistent, different linear functions in each of these different slots. So this suggests that we need to move away from the linear PCP model and look at something more general.

And so the core information-theoretic primitive that we're going to look at in this work is a notion called a linear multi-prover interactive proof, or linear MIP, which was also introduced in the original work by Ishai et al. In the linear MIP model, instead of giving the verifier access to a single linear PCP oracle, we instead give the verifier access to many independent linear proof oracles. The verifier is able to interact with all of these independent linear functions, and at the end it decides whether to accept or reject. Now, what we're going to show, though I won't have time for it in this talk, but what we show in the paper, is that you can compile linear MIPs into preprocessing SNARGs using much the same machinery, using linear-only encryption, where we encrypt vectors at a time rather than individual field elements.

So now suppose, for the sake of argument, that we had a linear MIP that happens to satisfy the following properties. The total number of proofs, call it ell, is only on the order of the security parameter. Moreover, each of these proofs has size essentially the size of the circuit that we want to verify divided by the total number of proofs. So in total, the proofs are still about the size of the circuit, but instead of giving one proof, the prover chops a single proof up into ell different proofs, each of size roughly the circuit size divided by ell. And moreover, suppose the verifier only makes polylogarithmically many queries to each of these different proofs. Then we say that the resulting linear MIP is quasi-optimal. Why do I say that? Well, let's look at the complexity for the prover to respond to the verifier's queries. The prover complexity in this linear MIP setting is basically going to be proportional to the number of proofs times the length of each proof, right? Because basically, on every query, the oracle replies with the inner product between the proof and the query. And so if each proof has size s over ell and we have ell proofs (with only polylogarithmically many queries to each), that means the total amount of work the prover has to do is roughly proportional to, in fact quasi-linear in, the size of the circuit.
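Here is the parameter accounting behind that claim, written out (ell denotes the number of proofs, s the circuit size, lambda the security parameter; the notation is an addition to the transcript):

```latex
% Parameters of the quasi-optimal linear MIP sketched above:
\ell = \widetilde{O}(\lambda) \text{ proofs}, \qquad
|\pi_i| \approx s/\ell \text{ each}, \qquad
\mathrm{polylog}(\lambda) \text{ queries per proof}.

% Prover work: one inner product of length s/ell per (proof, query) pair.
T_{\mathsf{prove}} \approx \ell \cdot \mathrm{polylog}(\lambda) \cdot \frac{s}{\ell}
  = \widetilde{O}(s).

% Total number of oracle responses (which drives the SNARG proof length):
\ell \cdot \mathrm{polylog}(\lambda) = \widetilde{O}(\lambda).
```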
And if we look at the total number of responses that the prover computes, that's going to be proportional to the number of proofs times the number of queries, which is quasi-linear in the security parameter. So in this work, the core focus is building a new quasi-optimal linear MIP, which can then be compiled, or bootstrapped, into a quasi-optimal preprocessing SNARG. To construct a quasi-optimal linear MIP, we rely on two core ingredients. First, we rely on what's called a robust circuit decomposition, which I will sketch in the next few slides, combined with a consistency check mechanism. Together, these two ingredients give us the first quasi-optimal linear MIP construction.

So let me briefly describe for you what the robust decomposition primitive does. We take as input a Boolean circuit of size S, the circuit that we want to verify, and we split it into many smaller circuits, or many smaller constraint functions. So instead of checking satisfiability of the big circuit C, we're going to check satisfiability of each smaller constraint. Here we have ell constraints, and we're going to assume that each of these constraints can be verified by a circuit of size S over ell. So we're taking the circuit C and decomposing it into many smaller subcircuits. Next, we take the original statement and witness for the big circuit C and encode them into a statement and witness that will be used for each of these smaller constraint functions. Each of these constraints then reads a small number of bits of the encoded statement and witness.

The properties we want from a robust decomposition are as follows (a sketch of this interface follows below). First is the completeness requirement, which says that if we have a statement and witness that satisfy the original circuit, then the encoded statement and witness satisfy all of these different constraints. Next, we have a robustness requirement, which is where the name robust decomposition comes from: if I have a false statement, a statement that does not satisfy the original circuit, then no matter what witness I use, only a constant fraction of these constraints can actually be satisfied, say only a two-thirds fraction of them. And finally, there's an efficiency requirement, which says that the encoding function that takes the statement and witness for the large circuit and produces a statement and witness for these smaller constraint functions can be computed efficiently, namely in quasi-linear time.

Okay, so why does this robust decomposition primitive help us in realizing a quasi-optimal SNARG, or in building a quasi-optimal linear MIP? For the following reason. We're going to take our circuit C that we want to check satisfiability of, and instead of checking satisfiability of the big circuit C, we're going to check that each of these smaller constraint functions is satisfied. And the way we're going to do that is to just give a standard linear PCP proof of satisfiability for each of these constraint functions. In particular, using linear PCPs based on quadratic span programs, the size of each of these individual proofs is roughly equal to the size of the circuit C divided by the number of constraints we have, which in this case will be on the order of the security parameter.
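Here is a hypothetical interface pinning down the robust decomposition contract referenced above. It is not the construction from the paper (which, as mentioned later, uses MPC-in-the-head); the names are mine and only capture the completeness/robustness requirements as comments:

```python
# Hypothetical robust-decomposition interface: an encoder plus ell small
# constraint functions, each reading a few bits of the encoded pair.

from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class RobustDecomposition:
    # constraints[i] checks one small constraint over the encoded (x, w)
    constraints: List[Callable[[Sequence[int]], bool]]
    # maps (statement, witness) for the big circuit C to the encoded pair
    encode: Callable[[Sequence[int], Sequence[int]], Sequence[int]]

def satisfied_fraction(decomp: RobustDecomposition,
                       statement: Sequence[int],
                       witness: Sequence[int]) -> float:
    """Fraction of constraints satisfied by one consistent encoded witness.
    Completeness: 1.0 for a true statement with a valid witness.
    Robustness: at most a constant (e.g. 2/3) for a false statement,
    no matter which single witness is used."""
    enc = decomp.encode(statement, witness)
    ok = sum(1 for f in decomp.constraints if f(enc))
    return ok / len(decomp.constraints)
```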
Each of these underlying proofs now provides only an inverse-polynomial soundness error. To check the proofs, what the verifier is going to do is check that all of these individual constraints are satisfied. So why does this new system provide completeness and soundness? For the following reasons. Completeness follows just from completeness of the decomposition and of the underlying linear PCP instances that we use to argue satisfiability of each of these smaller constraints. Soundness follows for the following reason. Each of these individual linear PCPs provides inverse-polynomial soundness. So suppose we have a false statement. The robustness condition says that for a false statement, only a constant fraction of these constraints can be satisfied. And so if each of these linear PCP proofs provides inverse-polynomial soundness and we have on the order of the security parameter many proofs, then a constant fraction of them will actually fail, and so we get soundness amplification: the verifier will detect a false statement with probability that is inverse exponential in the security parameter.

So where's the lie here? This argument doesn't quite work, for the following reason. Our robustness condition only says that there's no single, consistent choice of witness that can simultaneously satisfy all of these different constraints. However, a malicious prover can cheat us by using a different witness when arguing satisfiability of each of these different constraints, and this can actually break soundness. So the second ingredient that we need to make this argument go through is a consistency check mechanism. We need to somehow bind the prover to using a consistent set of witnesses when arguing satisfiability of each of these smaller constraint functions. The way we're going to do that is to impose an additional property on the underlying linear PCPs, namely that they're systematic: each linear PCP proof actually contains a copy of the witness that was used to generate that proof. Effectively, the way we can view this is that each linear PCP proof induces an assignment to the bits of a common witness, and the consistency check mechanism simply verifies that the prover used consistent witnesses to construct all of these different linear PCPs. I'm not going to go into the details; I'll refer you to the paper for that.

So just to recap, the way we build a quasi-optimal linear MIP is to begin with a robust decomposition mechanism that takes a large circuit C that we want to check, and instead of checking satisfiability of the big circuit, we instead check satisfiability of all of these different constraint functions. In the paper, we show how to instantiate this primitive using MPC-in-the-head combined with a very efficient MPC protocol; I won't have time to go into the details there. The second ingredient we need is a consistency check mechanism that forces the prover to use a consistent set of witnesses when arguing satisfiability of each of these smaller constraint functions. Together, we obtain a quasi-optimal linear MIP.

So just to wrap up, in this talk we focused on the notion of quasi-optimality for succinct non-interactive arguments. These are argument systems that simultaneously minimize the proof size as well as the prover complexity.
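The following toy sketch illustrates the two points just made: the idealized soundness-amplification arithmetic (treating the per-constraint checks as independent, which the talk glosses over as well), and a consistency check over systematic proofs whose first coordinates carry the witness. The function names, parameters, and data layout are mine, for illustration only:

```python
# (1) Idealized amplification bound, and (2) a witness-consistency check over
# "systematic" linear PCP proofs that embed the witness they were built from.

def cheating_probability(eps: float, num_constraints: int,
                         unsatisfied_fraction: float = 1/3) -> float:
    """If a false statement leaves at least unsatisfied_fraction of the
    constraints unsatisfiable (robustness), and each per-constraint linear PCP
    lets a cheating prover slip past with probability at most eps, then
    fooling the verifier on every such constraint happens with probability at
    most eps ** (unsatisfied_fraction * num_constraints)."""
    return eps ** (unsatisfied_fraction * num_constraints)

def witnesses_consistent(proofs, witness_positions) -> bool:
    """Consistency check: the systematic parts of all proofs must induce the
    same assignment to the shared witness bits they touch. proofs[i] is the
    i-th proof vector; witness_positions[i] maps local systematic coordinates
    of proof i to global witness indices."""
    assignment = {}
    for proof, positions in zip(proofs, witness_positions):
        for local, global_idx in positions.items():
            if assignment.setdefault(global_idx, proof[local]) != proof[local]:
                return False
    return True

# e.g. cheating_probability(0.5, 128) is about 2**-42, inverse exponential
# in the number of constraints (and hence in the security parameter).
```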
We gave a new framework for building these quasi-optimal SNARGs by combining linear MIPs with linear-only vector encryption. And we gave a construction of a quasi-optimal linear MIP using a robust decomposition together with a consistency check mechanism. In the paper, we also explore what happens when we push the succinctness to the limit. What if we had a notion of a one-bit SNARG that provides soundness one half? It turns out that such SNARGs actually have surprising connections: they imply a weak form of witness encryption that suffices for numerous cryptographic applications. And this really highlights a connection between soundness and confidentiality. There's also another upcoming work by Berman et al. at Crypto this year, which explores some similar types of connections: they start from zero knowledge and show that it implies public-key encryption. I encourage you to look at that paper for more details as well.

And finally, let me give you a few open problems raised by our work. One problem is that all of the constructions I described in this work are in the designated-verifier setting. So a natural challenge is trying to build publicly verifiable SNARGs that remain quasi-optimal. Perhaps a first step is actually to realize what are called multi-theorem designated-verifier SNARGs: the constructions that I just described are only secure in a single-theorem setting, and it would be interesting if we could extend them to work in a multi-theorem setting. Another challenge is trying to get zero knowledge for these SNARGs without destroying the quasi-optimality of the underlying construction. Thank you very much.

Q: So in this decomposition of the circuit into constraints, the parameter ell, is it dependent on the security parameter?
A: Which parameter?
Q: Ell, the number of constraints in the decomposition.
A: Ah, yes. So in the robust decomposition, we take a large circuit and we decompose it into on the order of security parameter many constraints.
Q: Yes, so I mean, maybe it would be more efficient and more effective if it instead depended on the size of the circuit, so the decomposition could be larger.
A: Right, yeah, that's a good question. So when we actually compile this into a preprocessing SNARG, we have to pack all of these different constraint functions into a single ciphertext. So if the number of constraints is now proportional to the size of the circuit, then we have to encrypt a vector of length proportional to the circuit size for each of the ciphertexts in the CRS. And then the length of each ciphertext is not succinct anymore; it's going to be proportional to the circuit size.
Q: You mentioned that you can just encrypt vectors of that length. So it means the scheme should satisfy something more than additive homomorphism?
A: That's right, yes. Linear-only says that the only kind of homomorphism supported by the scheme is this additive homomorphism, and we can formalize it using the notion of an extractor: there's an extractor that extracts a linear function that explains any ciphertext you can derive from the ones you are given.
Q: Okay, so now if you want to distribute your...