from Matter Labs. We will not show many pieces of code, because that would be deadly boring; instead we will try to explain how all the parts work for those who were not at the Tel Aviv blockchain week, where STARKs had their own workshop. (You need the microphone for the recording — or I can just speak louder.) So first, for those who were not at the Tel Aviv blockchain week, we will give a short review of how STARKs work and where you should pay attention, so that you don't just make the system sound plausible but actually understand the code. We will focus mostly on the most powerful component of STARKs, which is FRI, the protocol for proofs of proximity, and instead of showing code we want to show some applications of it. Initially we wanted to show how people could build a STARK themselves, which is unfortunately not possible without huge modifications. If time allows, we can also quickly discuss VDFs — how STARKs can be used for verifiable delay functions. At the end of the presentation we will show part of a commitment scheme, potential improvements to it, and the difficulties you run into if you just build it naively. The level will be quite intense, so be prepared and be aware. There is a link at the end of the presentation where we may be able to answer questions if something is unclear. The format is: if you don't understand something and really want to ask a question, raise your hand, and we will stop and answer it. We won't speak about our own implementation of STARKs, called Hodor, because of the recent appearance of new proof systems such as PLONK; instead, because of our work on commitment schemes, we decided to give the motivation for that work, which has its roots in STARKs.
I will start from the definition of STARKs, show how they look, and then show how some components of a STARK — mainly the final step, FRI — can be combined with, for example, PLONK to achieve a fully transparent and succinct SNARK. Next slide. The roadmap is the following: a reminder of STARKs, where I will briefly describe all the steps — first the intermediate representation called AIR, then the arithmetization, then FRI itself; then I will speak about what a polynomial commitment scheme is, about the KZG commitment which is mostly used now, and about how we can build a new commitment scheme from FRI; and then about possible applications of our commitment scheme to make fully transparent SNARKs. Okay, let's begin. This is the formal definition of algebraic intermediate representation. It may look quite weird, but I will explain on the board what it means, and you will see that it's pretty simple. AIR is just a kind of register machine. You choose your parameters: F is a finite field, which should be a prime field; W is your number of registers; and T is the length of your trace, the number of steps your program executes. At every step you modify the contents of every register, so in fact your program execution is encoded as a matrix of size T × W. The question is: what are the possible ways to modify the contents? This is how a STARK works — there are restrictions on how the values may change: at every step we must apply the same set of polynomial constraints; call them the transition constraints p1, p2, …, pk. Every constraint is a polynomial function in 2W variables: the first W variables are the old state of the registers, and the next W are the new state. The simplest example, which I think many of you have seen many times, is the Fibonacci sequence.
The first register holds 1, 1, 2, … and the second holds 1, 2, 3, …, so the new value of the second register is the sum of the values in the previous registers, and the corresponding function in 2W variables is y2 − x1 − x2 = 0. What are x and y? I call the old values of the registers x and the new values y: x1, x2 are the old pair, y1, y2 the new pair. So we apply the sum function, and the new value of y1 is just the previous value of x2. The same set of functions is applied at every step. Of course, in real applications, when you try to encode some involved circuit, you will have polynomial relations which can be quadratic or cubic — it doesn't really matter — and you can mix the x and y variables as you wish. The main thing I want to stress is the difference from a rank-one constraint system. When you encode some gadget in R1CS — for example a hash function — and then use it in your circuit several times, you pay every time you use it: if your circuit contains three instances of the same hash function, you get triple the number of gates. With STARKs the situation is quite different: you encode your gadget only once and then use it repeatedly, but you need a repeating pattern — the same set of constraints is applied at every pair of steps, or at steps some fixed distance apart, one or two, but there must be some repeatable pattern. By the way, how many of you are familiar with rank-one constraint systems? Okay, less than half, so going deeper into that comparison doesn't make much sense. The witness is just a particular instance of the matrix: the values of the matrix are chosen so that those relations hold.
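The two-register Fibonacci AIR just described can be sketched in a few lines of Python (a toy illustration with variable names of our own choosing, not code from any STARK library): the trace is a T × W matrix, and the same transition constraints are checked on every consecutive pair of rows.

```python
# Toy two-register AIR (W = 2) with trace length T = 4; names are ours.
T, W = 4, 2
trace = [[1, 1]]                      # boundary: first row fixed to (1, 1)
for _ in range(T - 1):
    x1, x2 = trace[-1]
    trace.append([x2, x1 + x2])       # transition: y1 = x2, y2 = x1 + x2

def constraints(x, y):
    # transition constraints p1 = y1 - x2 and p2 = y2 - x1 - x2
    return [y[0] - x[1], y[1] - x[0] - x[1]]

# the same constraint set holds on every consecutive pair of rows
assert all(c == 0 for x, y in zip(trace, trace[1:]) for c in constraints(x, y))
assert trace == [[1, 1], [1, 2], [2, 3], [3, 5]]
```

Note that the hash-function gadget from the R1CS comparison would work the same way: one set of constraints, reused at every step of the trace.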
That's it. By the way, so far I have only spoken of transition constraints; the other thing is how to embed public inputs, and this is done by boundary constraints. They are just tuples of two coordinates and a value: the first coordinate is the step at which to apply the boundary constraint, the second is the register number, and then there is the value α. A boundary constraint simply says that this cell of the matrix has value α. This is quite simple, I think. Next slide. [Question: can you go back to the last two lines?] Yes — the second sentence says that all your constraints should be satisfied: every element of P is a function in 2W variables, the first half of the variables being the previous row of the matrix and the second half the next row. That is what's written in the last line. Any questions? Next slide. As we know, the best way to encode any proof system is to reduce everything — even the witness — to polynomials. We already have the boundary constraints and the transition constraints represented in the form of polynomials, but our witness is still a matrix, so we need to transform our AIR representation into the form most suitable for the STARK itself. We need an arithmetization: we want to place the values of our matrix, which is our witness, into a polynomial. Again, this is just the definition — it looks weird, and I will explain what is done in the simplest case; in reality there are some difficulties. The main thing is the following: we have our finite field, and we take two subgroups G1 and G2 inside the multiplicative group of the field, the first of size T and the second of size W. In practice we want our field to have high 2-adicity — the order of its multiplicative group should be divisible by a high power of 2 — and we want the trace length and the number of registers to be powers of 2.
If that is not the case, we can simply pad both dimensions; there is no problem with it. Then let g be the generator of the first group and ν the generator of the second, and we enumerate the cells of our matrix with elements generated by these two groups: the cell with index (i, j) is labeled with g^(i−1) · ν^(j−1) — I hope you catch the idea. Then we take the function f on the domain G1 × G2 with values in F, where the value at g^(i−1) · ν^(j−1) is just the value of that cell. This will be our witness. That was the very formal definition; for those who want to do a practical implementation and just understand how this works, it simply says: take every row, concatenate them all together, and do the interpolation over a multiplicative subgroup of a prime field. It's very simple, and you can see the same structure in many systems where you want to encode your witness as a polynomial: you literally place your witness values, you say "there exists a witness polynomial, and I require the values of this witness polynomial at certain points to be equal to my witness values." If you do the same here, you take every row, concatenate them all, and you get a set of values of size T · W; so you just need a subgroup of size T · W and a simple interpolation, which from an implementation perspective is just a fast Fourier transform. Even that less formal definition doesn't show the full essence, and the formal one is a too-complicated explanation of how this actually works; the simplest view is: just do interpolation. You treat each label as a point, and the corresponding matrix entry as the required value of the polynomial at that point.
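This interpolation step can be illustrated with a toy computation (all parameters are made up by us for illustration): take F₁₇, whose multiplicative group has order 16, and its order-4 subgroup generated by 13; one register column is placed as the values of a witness polynomial over that subgroup.

```python
# Toy illustration (made-up parameters): interpolate one register column
# over the order-4 multiplicative subgroup of F_17 generated by 13.
p, g = 17, 13
domain = [pow(g, i, p) for i in range(4)]   # [1, 13, 16, 4]
column = [1, 1, 2, 3]                        # witness values for this register

def lagrange_eval(xs, ys, x):
    # evaluate the unique interpolating polynomial at x (all arithmetic mod p)
    total = 0
    for i, xi in enumerate(xs):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + ys[i] * num * pow(den, p - 2, p)) % p
    return total

# the witness polynomial reproduces the register values on the domain
assert [lagrange_eval(domain, column, x) for x in domain] == column
```

In a real implementation this Lagrange evaluation would of course be replaced by an FFT over the 2-adic subgroup, exactly as the talk notes.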
This requirement strictly defines a polynomial of degree T · W − 1 through the T · W values. As for optimizations: you don't actually concatenate everything together, because that gives you one very large polynomial of size T · W. You can instead say "I will have not one but W witness polynomials" — this doesn't change the essence of the STARK — where witness polynomial number one is just for register number one. You do a larger number of interpolations, but each of them is of much smaller size. This is a solid optimization which is used everywhere — by StarkWare and by the other STARK implementers — and it is the efficient way to implement this. But here we won't say much about optimizations, because there are too many of them; we are just trying to explain the essence of the STARK. (And this part is not really the essence of the STARK itself — it's the common technique of encoding the witness as a polynomial instead of as a set of values.) [Question: can you explain the next step — where does the last long equation come from?] Why do we encode things this way? Remember that every transition constraint is applied to every pair of consecutive rows. What happens if we take our mask to be the consecutive powers of the generator of the second group? Let me write it down: f(ν x), f(ν² x), and so on up to the last one — that is the previous row of registers — and then f(ν x · g), f(ν² x · g), and so on — the new row. So the constraint polynomial p is in 2W variables: for the first W variables we plug in f at the consecutive powers of ν multiplied by x, and for the next W we take the same arguments but additionally multiplied by the generator g of the first group. Then what happens if we take x to be the unity element of the field? The first set of values becomes the first row of our matrix and the second set becomes the second row — so with x = 1 we encode our constraint on the first pair of rows.
If we take x to be the generator g of the first group, we encode the second pair; taking g², the third pair of rows; and so on. So this equation holds for all elements of our first group. Then we substitute back our mask — our set of masks, those parameters — the constraint polynomial is the same one taken from the AIR representation, and our domain is just the set of elements of the first group. So our vanishing polynomial Q is the product of (x − h) over every element h of the domain, and since these elements form a multiplicative subgroup of order T, this product can be written in the single sparse form x^T − 1. This is a very important fact — that our vanishing polynomial has such a compact representation — because it should be computable efficiently; we exploit the structure of the group. With one exception: the relation should hold on every row except the last one, because there is nothing after the last row; but we are in a cyclic group, so keeping the full product would literally link the last row back to the first one, where the relationship doesn't hold. So you have to cut one root out of the vanishing polynomial, but it is still efficiently computable. (Another possibility is to extend your trace, but that costs you a bit extra.) This is the strict definition of how it works; just for intuition, the same trick was used by Zac Williamson and Ariel Gabizon in their PLONK proof system: you can multiply the argument of a polynomial by the generator, and in certain cases this gives you a displacement — a time displacement. Let's go back to the Fibonacci example with rows (1, 1), (1, 2), (2, 3); I will work only on these two rows for now. By the definition of the witness polynomial over x — say I use ω as the generator, with ω⁰ = 1, because I have four elements here — the cells correspond to the powers of ω, and the register values are placed one by one at those powers.
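The "cut one root out" construction can be checked numerically (toy parameters of our own choosing): the sparse form (x^T − 1)/(x − g^(T−1)) agrees everywhere with the explicit product over the first T − 1 subgroup elements, which is what keeps the vanishing polynomial efficiently computable.

```python
from functools import reduce

# Toy parameters (ours): order-4 subgroup of F_17, generated by g = 13.
p, T, g = 17, 4, 13

def Z_product(x):
    # explicit product over the first T-1 subgroup elements (last root cut out)
    return reduce(lambda acc, i: acc * (x - pow(g, i, p)) % p, range(T - 1), 1)

def Z_sparse(x):
    # efficient sparse form (x^T - 1) / (x - g^{T-1}), valid for x != g^{T-1}
    num = (pow(x, T, p) - 1) % p
    den = (x - pow(g, T - 1, p)) % p
    return num * pow(den, p - 2, p) % p

last = pow(g, T - 1, p)
assert all(Z_product(x) == Z_sparse(x) for x in range(p) if x != last)
assert all(Z_product(pow(g, i, p)) == 0 for i in range(T - 1))  # vanishes on the rows
assert Z_product(last) != 0   # ...but not on the cut-out last element
```

So the verifier never needs the full product: two exponentiations and one inversion evaluate the vanishing polynomial at any point outside the cut-out root.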
So why is this called a mask? It allows you to navigate: by selecting x you select a time step. In a real application, where you have separate witness polynomials for the separate registers, you don't need to navigate in space, but you still need to navigate in time. So f is a function over x — what happens if I look at f(ω · x)? Say at x = ω⁰ = 1 the original f gives me the first value; the shifted function at the same point gives me the value at ω¹, the next one; and the shifted function at ω gives me the value at ω² — so I have literally displaced my time by one step. That's why it's called a mask. Admittedly I chose not the easiest example — too many ones here; if you do the same for the second register it is much more representative. But this is what lets you navigate. From a practical perspective this is everything you want: you can transform the polynomial f(x) into the displaced polynomial f(ω · x) by a very simple, efficiently parallelizable computation which takes a linear number of multiplications. So in a practical implementation, with all the optimizations, you never need to navigate in space — even though the original papers everywhere say "take all your witness values, put them together, use one witness polynomial," you never do that in practice; you only need to navigate in time. To be precise, that refinement is not from the original paper but from the follow-up work called the DEEP technique — though that one is really about something rather different. [Question: here, when you say "for all x in F" — is F the whole field you work over, or only the subgroup?] Oh no — it's for all x at which the domain's vanishing polynomial is equal to zero, so the subgroup.
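The time-displacement trick is easy to check numerically, using a small toy field of our own choosing (the helper is ours, not from any library): evaluating the witness polynomial at ω · x at time step i returns the register value at step i + 1.

```python
# Toy field and subgroup (made-up parameters); Lagrange helper is ours.
p, g = 17, 13
domain = [pow(g, i, p) for i in range(4)]   # [1, 13, 16, 4]
column = [1, 1, 2, 3]                        # register values over time

def lagrange_eval(xs, ys, x):
    total = 0
    for i, xi in enumerate(xs):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + ys[i] * num * pow(den, p - 2, p)) % p
    return total

# f(omega * x) at time step i is the register value at step i + 1
for i in range(3):
    shifted = lagrange_eval(domain, column, g * domain[i] % p)
    assert shifted == column[i + 1]
```

In coefficient form the displacement is just as cheap: multiplying the k-th coefficient by ω^k gives the coefficients of f(ω · x), which is the linear-time, parallelizable computation mentioned above.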
Sorry — it's precisely like the sketch; there are some technicalities, such as working over a coset outside the main domain, which I don't want to speak about here. Okay, so what's going on? Remember from the previous slide that if x is a root of the vanishing polynomial, then it must also be a root of this large construction, and here the large construction is taken as the numerator. So at every x from our domain — the elements of the first group — this fraction should be a polynomial, and we take random coefficients from the verifier in order to combine all those fractions into one. [Question: what do you mean by fraction?] Let me explain the meaning of this. You see the vanishing polynomial in the denominator. If this fraction can be formed at all — literally, this means that at every x at which the vanishing polynomial is zero, the constraint composition is also zero — then you can divide one by the other, because the numerator has a simple root at each such point. It means that if you do this division, the resulting construction is not a rational function but a polynomial of a degree which you can predict in advance. From a practical perspective, when constructing a system you want to keep the constraints at low degree, so that the polynomial that goes into FRI does not reach a very large degree. If you have a small number of constraints of large degree, you would rather prefer to break them into constraints of degree 2 — well, 3 is not good because you always work with powers of 2, so 2 or 4, but not large — from an optimization perspective. So this is the equation for why you can divide: the operation gives you a polynomial, which is important for the next step, FRI, and not just some function — you can always define the quotient pointwise as a function, but then g(x) would not be a polynomial. [Question: can you remind me what p and q were?] Well, p is the composition of the constraints.
You no longer take x and y as free variables: you replace x1 by the witness polynomial with the proper mask applied, and so on. And q is just the vanishing polynomial — it expresses that you want the relation to hold on every row except the last one. This trick of dividing by a vanishing polynomial is used everywhere, even in Groth16, I think. Once again I want to stress the main idea, as it is written in the paper: our problem is reduced to the following — we want to show that the function g(x) is not a rational function but a polynomial of degree bounded in terms of the order of the first group, and we need a tool to check this efficiently. Next slide. The problem we are trying to solve is proximity testing. We have some access to a function f — I mean oracle access, which means you may only query some values, so the verifier sees only some evaluations — and the verifier wants to know whether this function is indeed a low-degree polynomial. On the definition of oracle access: let's talk about efficiency. How could you solve this naively? You could prove that g(x) is a low-degree polynomial by reading all the values, since interpolation actually gives you a degree bound — but we are not here for that; what we want is efficient communication between the prover and the verifier. You don't want to check the relationship everywhere: that would need all the values and a linear amount of computation. This is why it's called oracle access. First the prover says "I know some set of values and I commit to them." Imagine it as a set of numbers which you break apart number by number and hand to a trusted party; you can ask for those numbers one by one, and they cannot be changed afterwards. In principle you could ask for all of them, but then you do an enormous amount of work; instead you pick a small number of values from this trusted source.
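The reduction just described — constraints hold on the trace if and only if the quotient is a genuine polynomial — can be made concrete with a toy computation (all parameters and helper names are ours): the Fibonacci transition constraint, composed with the masked witness polynomials, divides exactly by each linear factor of the transition-domain vanishing polynomial.

```python
# Toy check that the constraint composition is divisible by the vanishing factors.
p, T, g = 17, 4, 13
dom = [pow(g, i, p) for i in range(T)]        # [1, 13, 16, 4]
col1, col2 = [1, 1, 2, 3], [1, 2, 3, 5]       # the two Fibonacci registers

inv = lambda a: pow(a, p - 2, p)

def polymul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return r

def interp(ys):
    # Lagrange interpolation over dom; coefficient list, low order first
    res = [0] * len(dom)
    for i, yi in enumerate(ys):
        li, d = [1], 1
        for j, xj in enumerate(dom):
            if j != i:
                li = polymul(li, [(-xj) % p, 1])
                d = d * (dom[i] - xj) % p
        c = yi * inv(d) % p
        for k, lk in enumerate(li):
            res[k] = (res[k] + c * lk) % p
    return res

def peval(c, x):
    acc = 0
    for coef in reversed(c):
        acc = (acc * x + coef) % p
    return acc

f1, f2 = interp(col1), interp(col2)

# masked constraint C(x) = f2(g*x) - f1(x) - f2(x); its coefficients:
Cc = [(f2[k] * pow(g, k, p) - f1[k] - f2[k]) % p for k in range(T)]

# C vanishes on g^0..g^{T-2}, but not on the last row (the pattern wraps around)
assert all(peval(Cc, pow(g, i, p)) == 0 for i in range(T - 1))
assert peval(Cc, pow(g, T - 1, p)) != 0

def div_linear(c, r):
    # synthetic division by (x - r); returns (quotient, remainder)
    q, acc = [0] * (len(c) - 1), 0
    for i in range(len(c) - 1, 0, -1):
        acc = (acc * r + c[i]) % p
        q[i - 1] = acc
    return q, (acc * r + c[0]) % p

q = Cc
for i in range(T - 1):                 # divide out each transition-domain root
    q, rem = div_linear(q, pow(g, i, p))
    assert rem == 0                    # exact division: C/Z is a polynomial
assert len(q) == 1 and q[0] != 0       # quotient collapsed to a nonzero constant
```

A cheating prover whose trace violates a constraint would leave a nonzero remainder somewhere, turning the quotient into a rational function — which is exactly what the low-degree test is there to catch.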
And those values, picked one by one from this trusted source — in reality it is a trusted source — together with some proof that you also receive, will be enough for you to decide whether all of the committed values came from a polynomial of small degree, or are unlikely to have come from one. The upper part of the slide is the informal statement; there's no need to restate it in other words. Next slide. So the naive answer was, as Alex said, to query all the values of our function; but the domain is really large, it would be an enormous number of queries, and that is what we are trying to avoid. The question is: can this be achieved with a number of queries logarithmic in d, where d is the degree of our witness f? The answer is yes — FRI is a really ingenious protocol. The formal statement of the problem we are solving lives in the interactive oracle proof model. Is anybody familiar with the interactive oracle proof model? Okay, then I will explain. I think everybody knows the PCP model: the proof is data, but it is not sent as little pieces of information — it is sent as an oracle, a large table, and the verifier, having received this large table, may only query a small number of points. The interactive oracle proof model combines the two: it is an interactive protocol in which every message from the prover is a kind of oracle — a table from which the verifier queries a small number of bits and sends some response — and the next message from the prover is again a data table. When we speak about an interactive oracle proof of proximity, the first message is required to be the oracle for the function f which we want to prove is a polynomial of low degree, and in fact every subsequent oracle will also be the evaluation table of some function. Speaking about the case of FRI, we should recall, for the people not familiar with it, the Hamming distance.
Here you have a domain D and two functions f and g — any functions, not necessarily polynomials — and you just count the number of points at which they differ; here, for example, it is two. Then I take the relative Hamming distance: I divide this count by the size of the domain, so that my metric is between 0 and 1. And it is a genuine metric — it has all the properties of a metric: symmetry, the triangle inequality. [Question: so you are basically talking about the Reed-Solomon code model?] Of course, yes — FRI works with Reed-Solomon codes. Next slide. These are the parameters of FRI, and it indeed takes a logarithmic number of queries; but what I want to stress here is the soundness. Soundness uses a parameter δ. Our initial oracle function f may not be a polynomial — it may be some function which has some distance from the space of Reed-Solomon codewords — and FRI is unable to distinguish precisely between functions which are exact polynomials and functions which are merely close to the space of Reed-Solomon codewords. I think here we should do another small example. Initially we have a polynomial — say a polynomial in x that we somehow got from the prover; for the capital-F case it really is a polynomial, we already know that; this is just to show how FRI works. From the domain we get an evaluation table. Let's say our domain is small, just four elements: 1, 2, 3, 4. (Well, 1, 2, 3, 4 wasn't the best choice, but okay.) Say I claim the degree of this polynomial is 0, so the table is a lot of 1s; and then take a second table that differs in one cell — it is no longer a polynomial of degree 0, it is a short distance away in the space of functions. These are my two oracle accesses; in principle I would simulate them as tables I can query for values.
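The relative Hamming distance from the board is a two-line helper (a trivial sketch; the function name is ours):

```python
def rel_hamming(f, g):
    # fraction of domain points where the two value tables disagree
    assert len(f) == len(g)
    return sum(a != b for a, b in zip(f, g)) / len(f)

# the board example: two tables of size 4 differing in one cell
assert rel_hamming([1, 1, 1, 1], [1, 1, 2, 1]) == 0.25

# metric properties: symmetry and the triangle inequality
f, g, h = [0, 1, 2], [0, 1, 3], [1, 1, 3]
assert rel_hamming(f, g) == rel_hamming(g, f)
assert rel_hamming(f, h) <= rel_hamming(f, g) + rel_hamming(g, h)
```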
So what would I do, and what would I get as a result of the FRI protocol? I cannot read all the values at once — if I could, I would just interpolate locally by myself: for the first table I would see that it is indeed a polynomial of degree 0, just a constant, and for the second one I would do the same work and get a degree greater than 0. But I don't have access to everything, so I can only get an answer with some probability, and for that there is the parameter δ. The parameter δ tells me in how many points I allow the oracle to deviate — not from the exact polynomial I have in mind, but from some polynomial of degree 0 consistent with the same oracle. Here the two evaluation tables deviate in one point, so in this case my δ is one quarter. So say I want to distinguish between two cases: either the table is exactly a polynomial of the given degree, or it is 1/4-close — different in at most one point of my four, but no further; not two points, I am okay with one and no more. For this I fix my parameter δ, which determines how many query steps the verifier has to run, which from a practical perspective determines your cost at the end of the day. But say I want to be less sensitive, so I am okay with δ being one half: I will allow someone to give me a table which is not exactly from a polynomial but from some function which deviates in two cells. Then my δ is larger, and the property of the parameter — up to certain conditions which I won't cover here — is that a larger δ gives you better soundness per query, so the proof can potentially be smaller.
If I want to be sure, with say 100 bits of security, that the oracle differs from a polynomial in at most one cell, my proof is going to be larger than if I only want to be sure it differs in at most two cells with the same 100 bits of security. This set of δ parameters is important, and we will show why in the later steps. The improvements of the FRI protocol, at least so far, were mostly improvements to the soundness: they improve the second term in this min function, and they also establish up to which δ you can even use the protocol; but we will not touch this in too much detail. [Question: how can you relate the number of points at which the polynomial deviates to bits of security?] Well, this is the soundness error: it basically tells you the probability that you are being cheated. I don't want to explain all the parameters here; the only important thing is that with this parameter δ we decide how much slack we can allow. This is the space of Reed-Solomon codewords, and this is the δ-ball around our function: we take the ball of radius δ, and any function inside that δ-ball will pass FRI with parameter δ. Of course, that doesn't mean the function is a polynomial — and I want to stress why this is nevertheless not a problem for STARKs. As I said, we need to show that some function is a polynomial, but if, within δ, the prover is able to pass FRI, this only means he knows some function in the δ-ball, which need not itself be a polynomial — while the only valid witness for a STARK is a polynomial. But it doesn't matter, because we have decoding algorithms — such as Sudan's list-decoding algorithm or the Welch-Berlekamp algorithm — which give us a proof of knowledge; they are just very efficient polynomial-time algorithms.
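A back-of-envelope sketch of the δ-versus-proof-size trade-off just described (this is our own simplification, not the actual FRI soundness formula: we simply assume each query independently catches a δ-far oracle with probability at least δ):

```python
import math

def queries_needed(security_bits, delta):
    # under the simplifying assumption above, the cheating probability after
    # q queries is at most (1 - delta)**q, so we need
    # q >= security_bits / log2(1 / (1 - delta))
    return math.ceil(security_bits / -math.log2(1 - delta))

# delta = 1/4: the oracle may deviate in at most a quarter of the cells
assert queries_needed(100, 0.25) == 241
# delta = 1/2: being less sensitive needs far fewer queries
assert queries_needed(100, 0.5) == 100
```

Larger δ means fewer queries for the same security level, which is exactly why the choice of δ drives the proof size.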
What they say is: the prover knows not only the oracle but the function itself, so if he is able to provide any function which is close, he can run those decoding algorithms himself and obtain a polynomial which satisfies all the conditions — and that polynomial is a true witness. This is actually the essence of, at least, the DEEP-ALI protocol. Everything we did before was about the polynomial — I mean, we assumed the prover was honest — but we encode the original problem probabilistically. So what we want to say is: suppose the verifier is satisfied with all the checks we have put here. It means certain values from the oracle passed, and we assume the prover knows the oracle in full — he has access to all the values locally and can do whatever he wants with them. Even if he satisfied the verifier while the oracle was not exactly the evaluations of a polynomial but merely close to one, there is an algorithm for the prover himself to take those values, decode a polynomial from them, and that polynomial will be a true witness. So even if the prover "didn't know" the witness but passed all the checks at the nominal number of bits of security, he would in principle be able to take those values and find the true witness easily, in polynomial time. This is proof of knowledge: if the prover has satisfied the verifier, then he can obtain knowledge of the true witness. It is not very intuitive, but in principle this is the proof of knowledge. I think we need to speed up, so just the limitations, which are quite commonly cited. Most of you are used to rank-one constraint systems, and there are many tools and libraries to encode programs as R1CS, while there are not so many tools to encode a STARK AIR — I think this is a real limitation. As I said, the nature of AIR is such that a STARK works well only for repetitive, VM-style computations. And the proof size is large, especially when we compare it to SNARKs.
In SNARKs the proof is constant size. But the final part of STARKs — FRI — is really ingenious, and maybe we can apply it to some other primitives. Next slide. What FRI shows us is that some function is a polynomial; maybe we are able to apply the strong properties of FRI to other, related problems, and we try to do this with polynomial commitments. This is the full definition, and here is what's going on. The prover has a polynomial, but he doesn't want to show this polynomial to us; he only gives us a commitment to it, so that we, as the verifier, may be sure that this polynomial won't be changed later. When the verifier sees the commitment, he only knows that the prover has some polynomial in mind, and moreover that the degree of the polynomial is bounded by some parameter fixed in the setup phase. Then, at a later step of the protocol, we ask the prover to open the committed polynomial at some point: the prover sends the evaluation of the polynomial there, and he also sends some proof of correctness, so that the verifier may check that this opening is related to the committed polynomial itself. It's quite simple. Next slide. [Question: what is this φ?]
Actually, φ is just the polynomial itself — we are speaking about polynomials over some field. I think we don't have enough time to cover the KZG commitment, which is the main polynomial commitment scheme used in today's protocols, but the main point is this. It again uses a kind of fraction — and fractions are the main thing FRI works with. This is the same technique that was applied in FRI: the prover has a polynomial φ, and if φ takes a certain value at a certain point, then subtracting that value gives a numerator which is zero at that point; and if the numerator has a root there, you can divide by the linear factor and get another polynomial whose degree is one less. The relation being exploited is about the degree of the polynomial you can commit to: you can be quite sure that the initial commitment was actually to a polynomial up to a certain degree, and after the division you cannot commit to a rational function — unless you've broken the underlying cryptography, you could not do this. It means that later, at the opening phase, you also could not fake a proof and say "this is my opening, it is the true value at this point of the polynomial I committed to" — otherwise you have broken the assumption, and then that is a separate problem. The main thing is that this parameter α must be known neither to the prover nor to the verifier, and there are many zero-knowledge schemes — such as, if I'm not mistaken, Sonic — where the only reason they require a trusted setup ceremony is that they are based on the KZG commitment. So if we are able to replace the KZG commitment with something that requires only a transparent setup, or no setup at all, then those zero-knowledge schemes become transparent by design. This is the main protocol of the FRI-based commitment.
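The commit-then-open interaction rests on the quotient (φ(x) − v)/(x − z) being a polynomial exactly when v is the true opening value; here is a toy check in a small prime field (parameters and helper names are ours):

```python
# Toy check (made-up parameters): the opening quotient (phi(x) - v) / (x - z)
# is a polynomial exactly when v is the true value phi(z).
p = 97
phi = [3, 1, 4, 1, 5]        # committed polynomial, coefficients low order first
z = 10                       # opening point chosen by the verifier

def peval(c, x):
    acc = 0
    for coef in reversed(c):
        acc = (acc * x + coef) % p
    return acc

def div_linear(c, r):
    # synthetic division by (x - r); returns (quotient, remainder)
    q, acc = [0] * (len(c) - 1), 0
    for i in range(len(c) - 1, 0, -1):
        acc = (acc * r + c[i]) % p
        q[i - 1] = acc
    return q, (acc * r + c[0]) % p

v = peval(phi, z)                      # the honest opening value
num = phi[:]
num[0] = (num[0] - v) % p              # numerator phi(x) - v
quot, rem = div_linear(num, z)
assert rem == 0                        # honest value: exact division, a polynomial

num[0] = (num[0] - 1) % p              # claim a wrong value v + 1 instead
_, rem_bad = div_linear(num, z)
assert rem_bad != 0                    # wrong value: only a rational function
```

A low-degree test on the quotient therefore certifies the opening: a wrong claimed value forces a nonzero remainder, so the "quotient" is no longer a polynomial of the expected degree.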
Let me explain what is going on. First, the commit phase: we, as the verifier, want to be sure that the committed function is a polynomial and not just some random set of values. The commitment here is again an oracle, as in FRI, so we run one instance of the FRI commit phase to be sure the committed function is a polynomial. Then, in the open phase, we run another instance of FRI, but this time with respect to the quotient function, and again we show that it is a polynomial. I think we will skip the proof; that is the main idea of why this works. But there is one important difference between the Kate commitment and the FRI-based commitment scheme. As I said some minutes ago, FRI is unable to distinguish between polynomials and functions which are close to them with respect to the Hamming metric, so we cannot even be sure that the committed function is itself a polynomial, or which polynomial will be opened later. Picture it like this: here is our space of low-degree polynomials, here is the function f to which the prover gives oracle access, and here is a circle of radius δ around f; let it intersect the space at some set of polynomials. Our commitment scheme only shows that the opening produced at the open phase will be the value of one of the polynomials in this intersection. There is one special case, when δ is chosen to be the unique decoding radius. In that case, if there is any intersection between the ball around our function and the space of polynomials of bounded degree, the intersection contains only one polynomial, and we can say the prover's initial commitment was not to the function but effectively to that one polynomial. But when we apply our commitment scheme to real protocols such as PLONK, we apply it to witness polynomials: in PLONK the verifier asks the prover to open the value of a witness polynomial, and we may treat any of the polynomials in the intersection as an admissible witness polynomial; it does not really matter which one.
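The unique-decoding picture can be made concrete with a toy Reed-Solomon code; the field, evaluation domain, and degree bound below are purely illustrative.

```python
# Toy illustration of why delta matters. Over a domain of size n, two distinct
# polynomials of degree < d agree on at most d - 1 points, so their codewords
# differ in at least n - d + 1 positions. Within the unique decoding radius
# (fewer than (n - d + 1) / 2 disagreements) a word can be close to at most
# ONE low-degree polynomial; beyond it, several may fit inside the ball.

P = 97
DOMAIN = list(range(16))   # evaluation domain, n = 16
# degree bound here: polynomials of degree < 4, so min distance >= 13

def evaluate(coeffs):
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in DOMAIN]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

a = evaluate([1, 2, 3, 4])
b = evaluate([5, 6, 7, 8])
assert hamming(a, b) >= 13      # distinct codewords are far apart

# a word with only 6 disagreements from `a` (fewer than 13/2 errors) is
# uniquely closest to `a`; no other codeword can be that close
word = a[:]
for i in range(6):
    word[i] = (word[i] + 1) % P
assert hamming(word, a) == 6 and hamming(word, b) >= 7
```

With more than (n − d + 1)/2 disagreements the uniqueness argument breaks down, which is exactly the relaxed case the talk discusses next.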
But there is one important thing, on the next slide. This is the proof; I think we will skip it for now. There is a version of the proof for the unique decoding radius and a relaxed version, where the intersection may contain more than one polynomial; for the relaxed version we show that the opening is related to one of the polynomials in the intersection. Now, another thing: in PLONK the Kate commitment is not used only to open the values of witness polynomials. The commitment is also used to speed up the evaluation of constraint polynomials. What do I mean by a constraint polynomial? A constraint polynomial encodes our problem; it is a polynomial known to both the prover and the verifier, but of very large degree. The verifier asks the prover to send the opening of this constraint polynomial at some point because he does not want to do the huge amount of work of evaluating a polynomial of that degree himself. With the Kate commitment we know that once a commitment is received, any opening of it is an opening of one specific polynomial, so we just check that the commitment corresponds to the constraint polynomial. But in our case there are several other polynomials to which openings may be provided, and for constraint polynomials this means the prover may send the opening of a different constraint polynomial, that is, an answer to a different constraint system, to a different problem. This is the difference between witness polynomials and constraint polynomials. We call this the evaluation problem, because there is no secret here: the polynomial is known to both sides, and the verifier wants the opening only to reduce his own complexity. The solution to the problem is the following. OK, here is our space of codewords again. The constraint polynomial is itself already a polynomial, and so it lies
inside our space. Then we fix the parameter δ, and there are some other polynomials within this δ radius. δ is fixed at the setup phase, the constraint polynomial is known to both parties of the protocol, and so all the other polynomials, the points inside the circle, can be known to both sides in advance. Call our constraint polynomial f0 and the others in the circle f1, f2, and so on. We just take some point z1 with the property that the value of our fixed polynomial differs there from the values of all the other polynomials in the circle: f0(z1) ≠ fi(z1) for every i ≥ 1. The search for such a point z1 is a kind of transparent setup. The list of polynomials may be huge, decoding all of them may take a lot of time, and the search for the point is also some work, but this setup needs to be done only once and it is fully transparent; there is no need for the secret parameters of the Kate commitment. Then, where the previous opening scheme divided by a single linear function, here we divide by the product of two linear factors, (x − z1)(x − z2), where z1 is the point we took at the setup phase and z2 is the point at which the verifier asks to open the value. And U(x) is the function that interpolates those two points with respect to the values y1 and y2: the setup value of f0 at z1, and the claimed opening value at z2. This is the only difference from the plain polynomial commitment. Any questions? I think here I should summarize the difference. It all comes down to polynomial relations. Everyone knows that if you have a nonzero polynomial of a certain bounded degree, and this is very important, then the probability that a random point of a large enough field is a root of it is negligible; as a consequence, a separating point will exist. Well, in principle we go all the way back to the start.
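As a sketch of the one-time transparent setup described above (the field, the polynomials, and the function names are stand-ins, not the real parameters): given the fixed constraint polynomial f0 and the list of δ-close polynomials decoded at setup time, search for a point z1 where f0 separates from all of them.

```python
# Hedged sketch of the transparent setup search: find a point z1 such that
# f0(z1) differs from the value of every other delta-close polynomial there.
# This is done once, in the open, with no secret trapdoor involved.

P = 97

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def find_separating_point(f0, others):
    """One-time transparent search over the field; no secrets produced."""
    for z1 in range(P):
        if all(poly_eval(f0, z1) != poly_eval(g, z1) for g in others):
            return z1
    raise ValueError("no separating point in this field")

f0 = [1, 2, 3]
others = [[1, 2, 4], [5, 2, 3]]   # stand-ins for the delta-close polynomials
z1 = find_separating_point(f0, others)
assert all(poly_eval(f0, z1) != poly_eval(g, z1) for g in others)
```

In practice the list of δ-close polynomials can be large, which is why the talk later mentions replacing this exhaustive search by a probabilistic argument.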
We now have a tool which we did not have before: a check at a random point. Everything here is good because you know the degrees up front, so you can make the check with enormous soundness at a small price if you do it at a random point. The problem, as Kostya said, is how δ behaves in FRI: if you make δ small, your proof size is large; with a larger δ you get the same soundness with a smaller proof size. So you really want to work with a large δ. In STARKs this is not a problem. If you take a large δ, you have this intersection, so in principle, at the random point, the prover can actually answer with the value of any of those polynomials, and he can do this for every polynomial that appears in the equation, well, up to a certain point. The best case for satisfying the equation at a random point is this: for one polynomial he has one set of options at the random point, for another he has another set of options, and you can pretty simply derive a soundness error for this case; it depends on the average number of values he can pick from these intersections, times the number of polynomials, divided by the field size. But it does not matter, because even if the prover picks any of those values, he still has to satisfy every constraint, and the constraints are known to the verifier from the setup phase; in a STARK the verifier can literally check every constraint. This is why STARK constraints come with such rigid limitations: you have to encode your constraints efficiently and uniformly. In SNARKs which work with arbitrary circuits, like Groth16 and everything else R1CS-based, PLONK with one arithmetisation, Sonic with another, you have to encode your problem somehow to keep the verifier efficient; otherwise the verifier would do the same work, applying every
constraint, which is linear work, or just evaluating some large-degree polynomial, which is still linear work. But we cannot just naively use this freedom for constraint polynomials, because different constraint polynomials are different problems, and that matters. If you allow the relaxed case, then in principle one of those δ-close points is your constraint polynomial, but nothing stops a malicious prover from computing all the other constraint polynomials in the ball, and maybe for one of those the actual witness, the solution, is trivial, just a set of zeros. You cannot prevent this and you cannot predict it up front, so in the naive commitment scheme you cannot allow this case: you cannot allow the prover to cheat not merely by picking from a set of values, but by picking a different problem. For a naively applied commitment scheme you would therefore have to limit δ to the unique decoding radius, which is roughly one half minus something. That is not too large, and it means the proof size for a given soundness would be larger than if you allowed the relaxed case. So now we want to try to increase δ. Say we work in the relaxed case; we need some kind of second check. We are back to this equation, and what we have to do is add a second opening point. In principle, since the problem is known to both the prover and the verifier, we could do the trick of computing all of the polynomials at the intersection and picking a point in the field where they are all different; in reality we will not do this, we will just pick a random point. The intersection is a small set compared to the field itself, so the chance that any two of those polynomials share a value at a random point is still negligible. So we can be quite sure that we picked well, and the prover can no longer change the problem. After this we are fine with the relaxed case, because now we know the
problem now, so our constraints are fixed as at the setup phase. We are fine with the prover having some freedom of choice within a set of values at the random point, because at the end of the day he still will not be able to satisfy the polynomial relation; this is similar to STARKs, though in PLONK it is different and much simpler. At the end of the day the soundness is still limited by the other part of the error, and since this part is no longer the problem, we actually get to the optimum, the practical soundness we were optimizing for initially. So now we can take a larger δ, given that the limit was on the other part, and this gives us a smaller proof for the same soundness. This lets you take the full benefits of the approach for whatever proof system you use that depends on polynomial commitments. The last thing: as I said, we now have a polynomial commitment scheme for witness polynomials and an evaluation scheme with transparent setup for constraint polynomials, and they can be applied to any zero-knowledge scheme that encodes both the witness polynomials and the constraint polynomials, queries their values at some point, and checks some polynomial relation between them using Schwartz-Zippel. This is exactly PLONK. Is anyone here familiar with PLONK? For those who are: the constraint polynomials in PLONK are the selector polynomials, q_L, q_R, q_M for multiplication, and sending random values is just the simulation of interaction. The final step of the PLONK protocol is to check some relation between a set of polynomials f1 through fm, which are the constraint polynomials fixed at the setup phase and encoding our problem, and the witness polynomials; for simplicity, say our constraint is just a product of some of them. At this final step we use Schwartz-Zippel: if the relation holds at one random point, then with overwhelming probability it holds over the whole space.
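The final Schwartz-Zippel check can be sketched as follows; the gate equation, the tiny selector setup, and the field size are illustrative simplifications, not the real PLONK arithmetisation.

```python
# Minimal Schwartz-Zippel sketch: to check an identity such as
#   qL(x)*a(x) + qR(x)*b(x) + qM(x)*a(x)*b(x) - c(x) == 0,
# the verifier queries every polynomial at one random point z. If the
# identity fails as polynomials, it also fails at a random z except with
# probability about deg / |F|.

import random

P = 2**31 - 1  # a larger prime, so the soundness error is visibly small

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# honest witness: c(x) = a(x) * b(x); selectors pick the multiplication gate
a = [3, 1]                 # a(x) = 3 + x
b = [4, 2]                 # b(x) = 4 + 2x
c = [12, 10, 2]            # (3 + x)(4 + 2x) = 12 + 10x + 2x^2
qL, qR, qM = [0], [0], [1]

z = random.randrange(P)    # the verifier's random challenge
lhs = (poly_eval(qL, z) * poly_eval(a, z)
       + poly_eval(qR, z) * poly_eval(b, z)
       + poly_eval(qM, z) * poly_eval(a, z) * poly_eval(b, z)) % P
assert lhs == poly_eval(c, z)   # identity holds everywhere, hence at z
```

In the real protocol the evaluations at z are not computed by the verifier but supplied through the commitment scheme (for witnesses) and the evaluation scheme (for selectors).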
So we just need to check the relation at one point. For this we use the commitment scheme to open the values of the witness polynomials at some point z of the verifier's choice, and we use the evaluation scheme to open the values of the constraint polynomials at the same point z, just to reduce the verifier's computation. Then you just check the relation, and that's it. It seems we have achieved our goal: a system which is fully transparent. OK, the setup is large because of the search for the point z1, but nevertheless it is transparent, it is succinct, it works in the interactive oracle proof model using just hash functions and information-theoretic constructions, and it is plausibly post-quantum. So that's it; any questions? You forgot to mention recursion. Definitely: since we are not bound by any trusted-setup parameters, we can take as many levels of recursion as we need for our purposes. Note that this is not a proof system; first of all, we are not trying to make a proof system. It can be paired with any proof system that depends on polynomial commitments and encodes its constraints, the problem itself, as polynomials; it allows you to apply this scheme to such a system. So for the implementation, you can take any proof system whose verification is based on the fact that you can commit to a polynomial and open it at a random point, especially a point of the verifier's choosing. Our reference target is: just take PLONK and replace the Kate commitment, which requires a trusted setup, by the new commitment and evaluation schemes. How does that impact runtime, both proving and verification time? For the current incomplete prototype, I think it is
just twice slower, but it should be optimized to the same level. Yes, it should reach the same speed, because for the FRI protocol you have to do low-degree extensions: you take a polynomial of degree n and you have to calculate its evaluations over a domain of size around 16n, which is a large share of the work, but it is already less than 50% of it. The rest of the work is mostly hashing, and hashing is fast but still not free. So yes, it should be on par, and for verification we will see what the cost model turns out to be. I mean, now it is also a freedom of choice: if you want to be transparent plus recursive, you can use this; if you do not need recursion and you already have a trusted setup, the powers of tau for the Kate commitment, and it is good enough for your problem, you can just use that. It is a freedom of choice, and I am not even talking about exotic constructions where you use the transparent part as the bottom level of recursion and then do the final level with the Kate commitment, because now you do not have the problem of cycles of pairing-friendly curves for SNARKs: everything you need here is a field, and specifically a field with high 2-adicity for the FFTs, which is already a requirement for basically everything built on polynomial relations. What do you mean by transparent? Transparent means you do not need a trusted setup. For Kate commitments, those powers of the value α must remain unknown, so we treat α as a variable, but you still have to produce those parameters somehow; there are ways, now we know there are ways, to produce them. And it is still universal, not a per-circuit trusted setup, because the Kate commitment itself is universal, but it still requires you to run the powers-of-tau ceremony. It also limits you in the size of the circuit: the Kate setup is linear in the size of the circuit, so a huge circuit needs a huge setup.
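The low-degree-extension cost mentioned in the answer above can be sketched like this; the naive evaluation loop and toy sizes are for illustration, since a real prover would use FFTs over a multiplicative subgroup.

```python
# Sketch of the low-degree extension (LDE): FRI commits to the evaluations of
# a degree-<n polynomial over a domain blown up by some factor (16x in the
# talk). This naive version is O(blowup * n^2); real provers use FFTs.

P = 97
BLOWUP = 16

def low_degree_extension(coeffs):
    n = len(coeffs)
    domain_size = BLOWUP * n
    # naive evaluation over the points 0 .. domain_size - 1
    # (a real system evaluates over a multiplicative subgroup via FFT)
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(domain_size)]

f = [1, 2, 3, 4]            # degree-3 polynomial, n = 4
lde = low_degree_extension(f)
assert len(lde) == 64       # the committed codeword is 16x larger than n
```

The blowup factor is exactly the rate parameter that trades prover work and proof size against soundness, which is the δ discussion from earlier.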
circuit needs all of those parameters, whereas with the FRI-based polynomial commitment it is a constant-size setup. And as for the multiplicative domains: now you have freedom in the choice of the field, so you can get a multiplicative domain of enormous size, up to some limit of course, while with the Kate commitment there is a practical limit on the degree of polynomial you can commit to, both in terms of storing the parameters and, if you are in the framework of permutation arguments and similar relations, you are also bounded by the multiplicative subgroup size, by the number of roots of unity of the field. By the way, you might replace the Kate commitment in PLONK-like systems by a DARK commitment? Right, it amounts to the same thing, because it is a commitment scheme; it all depends on the commitment scheme, and if you take one and it fits, that is just a freedom of choice. It only needs to be succinct if you want to work with, say, a smart contract as the verifier. Can you just give us an idea of what the proof sizes would look like? Yes: for the original commitment scheme, where we would require the unique decoding radius restriction, I estimated for a polynomial of degree around 2^28 something like 68 kilobytes without strong optimizations; 68 in the worst case, without optimization, and I cannot give a number with optimizations because I would have to try. In the case where we can use the trick with the evaluation scheme for setup polynomials, it should be reduced something like 5 to 6 times; it should fit within a 20-kilobyte limit without optimization. By the way, maybe there is even no need to find such a point z1 for the evaluation scheme; maybe we can just rely on the probabilistic argument as the setup. And this is what allows it to be recursive: for recursion, the only thing that matters is the circuit size of your last verifier, because it is the only part that will be
publicly verified; for everything else, the proof sizes are your own private business. The verifier circuit will depend on the choice of hash functions, on how you build the transcript for the Fiat-Shamir procedure, but I estimated something like 2^23 to 2^24, so it should be even smaller. Well, recursion gives you a trade-off between the proof size and how much work you want to do. OK, I think that is time; because it was a long session you should have a longer break. But yeah, thank you.