My goal in this talk is to convince you that this is not MoonMath: this is also tooling for developers. So today I'm going to present ZoKrates. I'll give a quick introduction to what it is for those of you who don't know it, and then I'll present a few things we've been working on that I'm excited to share. So what is ZoKrates? If you want to program SNARKs today, there are different tools you can use — maybe some of you have heard of a few of them — and all those tools have different trade-offs in terms of the power they give to developers, how easy they are to use, and how high-level or low-level they are. ZoKrates positions itself as a very high-level language: when you write ZoKrates, it looks very similar to writing Rust or Python. It compiles to R1CS, which is the kind of SNARK representation that is easy to verify on Ethereum today, so you can verify proofs directly in your smart contracts. It uses modular back-end implementations, which means that ZoKrates does not develop a proof system itself, but rather targets back-end implementations from the community for different proof systems and reuses those great implementations. In order to have such a high-level language and still get something that's really efficient at the low level, ZoKrates applies optimizations in the compiler — which is also different from some other tools that give you very low-level access to what's going on at the constraint level but don't apply a lot of optimizations. So where does it run? The compiler you can run natively on your machine; we have a Remix plugin as well; and we now also have a playground that you can access at play.zokrat.es, which I encourage you to try. That's probably the easiest way to get started with ZoKrates and just write some simple programs.
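For the native route, the ZoKrates project has shipped a one-line installer; the URL below is taken from memory of the project's getting-started docs, so double-check it against the current README before running:

```sh
# Downloads and installs the ZoKrates CLI (URL assumed from the project docs)
curl -LSfs get.zokrat.es | sh
```

After this, the `zokrates` binary should be on your PATH; the playground avoids the install step entirely.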
In terms of the schemes, there are several back-end implementations we support. We currently have support for Groth16, which, as some of you may know, is the SNARK that is the smallest and the fastest to verify; for GM17, which is an evolution of Groth16, let's say; for Marlin, which is a universal SNARK that I'll touch on a bit later; and for Nova, which I'll also go into — a new and exciting type of SNARK that enables new use cases. For proving, the particular implementations we rely on are Bellman, Arkworks, Bellperson, and also snarkjs; I'll go into a bit more detail on that integration. On the verifier side, you can verify in JavaScript or in the CLI, and for the schemes that are compatible with the EVM we generate a verifier contract, so that you can directly send your proofs to your smart contracts, verify them, and link them to your dapp. To give a bit of a hello world of ZoKrates, here's a program which takes two private inputs, a and b, and one public input c, and asserts that a times b equals c. So technically this is a program with which you can prove that you know the factorization of a number. The way you would run it is: you would first compile the program, which turns it into the low-level representation, that is, R1CS; then you could execute the program with an input, here 3, 3 and 9; then you could generate a proof using a particular proof system; and you could also generate a verifier that you can deploy on the EVM, send your proof to, and basically convince the network that you actually knew this factorization. For an example of something a little more advanced, to make the point that this is actually a high-level language, here's the implementation in ZoKrates of the SHA256 function.
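The factorization "hello world" just described might look like this in ZoKrates (a sketch, assuming ZoKrates 0.8-era syntax):

```zokrates
// Prove knowledge of a factorization of the public value c:
// a and b stay private; only c is revealed to the verifier.
def main(private field a, private field b, field c) {
    assert(a * b == c);
}
```

The CLI flow described above is roughly: `zokrates compile -i root.zok`, then `zokrates compute-witness -a 3 3 9`, `zokrates setup`, `zokrates generate-proof`, and `zokrates export-verifier` to get the EVM verifier contract.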
Just to list a few things that are expected in a high-level language: we have a module import system, you can import constants as well, we have for loops of course, we have function calls, and one exciting thing we added fairly recently — maybe those of you who use Rust are familiar with this feature — is the notion of constant generics. In this case, SHA256 is a hash function that can take an arbitrary number of bytes as input; however, in circuits all the inputs are static, so the size of the input always has to be known at compile time. What you can do is define the function as generic over K, with that number of rounds of the SHA round function; then, when you compile your program and actually use the SHA256 function, this variable K gets a concrete value, which compiles down to the exact number of blocks you're hashing. And if you try to do something dynamic — calling this function on something whose size is not known at compile time — it will just fail at compile time. So we can have a very high-level, generic implementation of SHA256. Of course there's more complexity in the SHA round, but if you look at the code, it's almost line-for-line equivalent to a pseudocode implementation you would see on Wikipedia, for example. Now I'm going to go into a bit more detail on the SHA256 implementation. Inside the SHA round, this expression needs to be calculated many times: it takes three unsigned 32-bit integers a, b and c, and calculates (a and b) or (a and c) or (b and c). This is something you can just write like this in ZoKrates today, compile, and it will be translated to a number of constraints at the low level. However, if you do the math and look at the first bit of a, the first bit of b and the first bit of c, you can see that if you consider them as numbers, so as field
elements, they actually satisfy these two equations. You define — or rather constrain — a new variable bc to be equal to b times c. And here I just want to clarify that these constraints are the low-level constraints we deal with, the R1CS constraints: they have one side which is linear (in this case it's only one variable, but you can have a sum of different variables) and one side which is quadratic. So the first one constrains a new variable bc to be equal to b times c, and then we introduce this res variable, which is our result for the first bit, and constrain it in this way. This is actually more efficient than what the compiler would generate itself, because here we have more knowledge of what we're actually trying to do than the compiler does. On the flip side, what we're doing here is introducing a new variable res and then constraining it with this equation. However, when you're dealing with such low-level details in SNARKs, it's really easy to introduce new variables but fail to constrain them sufficiently, so that you actually introduce vulnerabilities into your circuits. This is something ZoKrates does not expose to the developer at the moment, which means you can only do the first version, which is less efficient. If you look at lower-level tools, they do let you use these things, but then it's at your own risk, and it's likely that you're going to introduce vulnerabilities. So what I want to showcase today is the addition to ZoKrates of a way to actually encode this, and get the performance of the hand-written constraints, in the context of the high-level language. So I have a video now — if you could start the video. Right, so this is using the ZoKrates playground. Here I define what I call the default function, where I just let the compiler generate the constraints for this expression. Here I'm
going to create an entry point for this program, taking a, b and c and returning a u32, and I'm just going to call the default function and see how many constraints are created in the process. So I compile, and I get the result: 260 constraints. Now what I'm going to do is define another version of this function which will hopefully reduce the number of constraints by leveraging this lower-level implementation. I call it hand_optimized; it has the same signature — a, b and c — and also returns a u32. As I described earlier, I want to operate on each individual bit of those u32 values, so the first thing I do is turn those u32 values into arrays of booleans. We have sort of a magic tool in our standard library to do that, a cast function, which can do this conversion for you. And here I want to point out that this is actually free, because the u32 type is represented as 32 bits under the hood, so we're not paying any constraints for this. So I just cast the three of them. The next thing I do is introduce a new array of booleans for the result. One relatively new feature we have is that everything is immutable by default, so here I have to declare this variable mutable if I want to be able to modify it afterwards. Okay, so now that I have all the bits, I can start a for loop, for i from 0 to 32, and consider the i-th bit of a, b and c. If we want access to those low-level constraints, we need to reason at the level of field elements, so we need to turn those booleans into field elements, which is the lower-level representation. For this I call a bool_to_field function, which I'm going to define in a second; I do the same for b and c. And here I define this function — again, this is going to be free, it's not going to create any constraints, for the same reason as earlier: a boolean is actually represented as a field element at the low level, it's just that it can only take the value 0 for false and 1 for true,
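The helper being defined here is tiny; as described, it's just a ternary over the boolean (a sketch, using the name from the demo and assuming ZoKrates 0.8-era syntax):

```zokrates
// Free at the constraint level: a bool is already represented as a
// field element constrained to 0 (false) or 1 (true).
def bool_to_field(bool b) -> field {
    return b ? 1 : 0;
}
```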
so I can just return that using a ternary expression. Okay, so now I have a, b and c as field elements. The first constraint was bc equals b times c, and this constraint I can already express in the high-level language, because I'm both constraining and assigning bc — so I do that, and the first constraint is done. Then I declare this result, again mutable, and this is where the interesting new thing happens: I can create this assembly block, and inside it I have access to two special kinds of statements. The first one is just an assignment: I assign the value bc minus this other expression to res, but I did not create any constraints — it's purely an assignment, so it has no influence on the constraint system. Now, I want to be able to use this res value later, but to use it I need to make sure it's really constrained in the constraint system. So after this assignment I add an actual constraint, which makes sure everything is set in stone: bc minus res equals this multiplication. You can see that, again, it can be any expression when I assign, but the constraint has to be linear equals quadratic — and here it is, so this is working. Okay, so now I have everything set up and I'm convinced that res holds the result I need. The next thing I want to do is go the other way around and reconstruct my result — but I want a boolean, so I need to go from a field element to a boolean. And here I want to make it really clear that this is a potentially unsafe operation, because this res value — I know it's zero or one, because I wrote this — could in theory be any value. So I need to be really careful when I do this, but I can force the creation of this boolean with this value. Finally, I reconstruct the u32 value from the boolean array using the cast function again. Okay, and I change my entry
point to use the hand-optimized version. We were at 260; now we're at 164, so we made quite a big dent in our constraint count. So what's the idea here? We want to keep all the guarantees we have from our high-level language — we have types, we know that booleans can only be zero or one if you don't write assembly blocks — but at the same time we want access to this low-level capability. I think there's actually a parallel with Rust here, which says: we have a compiler that restricts all these things, but we still want to be able to do lower-level things, so we can disable a bunch of checks. We have a similar approach: as a developer you would write most of your program in safe ZoKrates, so to speak, but for the few parts that need the extra performance you can write them in these asm blocks — and try to make those blocks as small as possible, so that when you need to review the code, or make sure things are not underconstrained, you know exactly where to look, and those places are relatively small. One side effect of this, for us as a compiler team, is that we can use this ourselves to reduce the complexity of the compiler. For this particular operation in the SHA256 algorithm, we have a special case in the compiler which detects whenever we're doing this and uses these exact constraints; but now, using assembly blocks, we can potentially rewrite some of the internals of the compiler to use them instead, which reduces the size of the compiler code base and makes it easier to reason about and audit. The next thing I want to talk about, which is unrelated to this, is that ZoKrates is now compatible with snarkjs. snarkjs is a JavaScript library for SNARKs from the iden3 team — the circom team — which has a bunch of tools allowing you to work with SNARKs in a JavaScript context. What we have available today is that, if you start from your ZoKrates program
and you run compilation, it currently returns an output which is the low-level ZoKrates representation, but you can now also optionally output an .r1cs file, which is the format accepted by snarkjs. Then, if you want to execute your program, you take your input and the program itself and create a witness file which is also compatible with snarkjs. From that point on you're in snarkjs, so you can do whatever snarkjs allows you to do: use different proof systems, use a powers-of-tau ceremony, run your verifier in the browser, et cetera. So any tooling that's compatible with these formats can now be used with ZoKrates as well. Another topic I wanted to touch on quickly is incrementally verifiable computation. It's a scary term, but it's basically the idea that if you have a computation which you can split into steps — the same function being run over and over again on some state, updating that state — then you can use recursive SNARKs to prove this computation incrementally. As opposed to what ZoKrates currently targets — other proof systems, where you can think of the circuit as more like an ASIC, really set in stone, able to do only one computation, with everything bounded and static — in this case you can have a computation where you run one round now and another one later, and you can prove the execution of the computation at each step. Some use cases for this are succinctly verifiable blockchains: maybe some of you have heard of the Mina blockchain, where each time you get a new block, you verify the previous block as well as the transactions of this block, and that creates a SNARK, and you get this kind of recursive verification. Another use case is VDFs, because some VDFs actually have this structure of some state to which you apply a
function recursively, and being able to have a SNARK of that computation can be really useful. So what we're working on now — and we're actually pretty close to having this production-ready — is integrating a proof system called Nova, which does exactly this. I'm not going to go into too much detail here, but the way the API would work for developers is that you write a function, and the only restriction is that the input type needs to be the output type, because you want the recursive aspect to work. Then you compile this function for a specific curve — you need to use the Pallas curve, because under the hood Nova uses cycles of elliptic curves, so this doesn't work for just any curve, but we support the curves that enable it. Then you can prove a number of steps starting from a given state, and even after running this, you can run it again starting from the last state you had. So this hopefully opens up a lot of use cases for people who want to experiment with incrementally verifiable computation, and I invite you to test it — it's going to be out very soon. Okay, since I only have two minutes left, I'll go very quickly through some other things we added recently. There's the powers-of-tau ceremony: if you look at Groth16, for example, it requires a trusted setup, and the powers-of-tau ceremony enables you to do that using MPC, so you can run it directly with the ZoKrates CLI and also in zokrates.js. We also have support for log statements, with which you can inspect certain values of your code at runtime. We have support for the Marlin proof system I mentioned earlier, which has a universal setup, meaning you can do one setup and then use it for different circuits. We also changed a lot of the syntax following feedback from members of the community. And finally, there's this playground, which is now
accessible, and which I invite you to check out. Okay, I think that's all I had to share, so thanks for listening. If there are any questions, I'm happy to take them. [Q] Can we use the new Nova support to write a zkVM? [A] I think in theory yes. I'm not sure it would be the most efficient, but there were actually, a long time ago, VMs built on top of recursive SNARKs. They used recursion at each cycle of the CPU, which was quite inefficient, but maybe there's a new take you can have on that. [Q] So that would be one cycle for each opcode? [A] That was the case in those projects ten years ago — there was a project called TinyRAM where that's what they did: they basically took the state of the CPU, ran one opcode at a time, and produced a recursive SNARK each time. But maybe there are other approaches to leverage this proof system. [Q] Instead of making users write very optimized code, have you put effort into making the compiler figure out what the user wants and then optimize the circuit based on that? [A] So your question is whether, instead of exposing low-level code, we should have the compiler detect when it can do those optimizations? [Q] I mean both, but you were only talking about the one case. [A] Right — as I mentioned, this is something we do currently, where we identify some of those cases and just handle them in the compiler. But it's really hard to cover them all, because it's really use-case dependent. This was just one part of the SHA256 function, and there are so many different things you could optimize. So I think it's actually a good idea to open that up to developers, but in a way that's quite separated from the rest of the high-level code.
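As a companion to the assembly-block demo earlier in the talk, here's a rough, self-contained sketch of the hand-optimized majority function. The stdlib import paths, the `<--` (unconstrained assignment) and `===` (constraint) operators, and the exact sign convention inside the asm block are assumptions based on ZoKrates 0.8-era releases, and may differ slightly from the exact demo:

```zokrates
from "utils/casts/u32_to_bits" import u32_to_bits;     // paths assumed
from "utils/casts/u32_from_bits" import u32_from_bits;

// Free: a bool is already a field element constrained to 0 or 1.
def bool_to_field(bool b) -> field {
    return b ? 1 : 0;
}

// maj(a, b, c) = (a & b) | (a & c) | (b & c), two constraints per bit.
def maj(u32 a, u32 b, u32 c) -> u32 {
    bool[32] a_bits = u32_to_bits(a);   // free: u32 is 32 bits under the hood
    bool[32] b_bits = u32_to_bits(b);
    bool[32] c_bits = u32_to_bits(c);
    bool[32] mut out = [false; 32];
    for u32 i in 0..32 {
        field ai = bool_to_field(a_bits[i]);
        field bi = bool_to_field(b_bits[i]);
        field ci = bool_to_field(c_bits[i]);
        // Constraint 1: bc = b * c (constrains and assigns in one go).
        field bc = bi * ci;
        field mut res = 0;
        asm {
            // Unconstrained witness assignment (no constraint generated):
            // for bits, bc + a*(b + c - 2*bc) is exactly the majority.
            res <-- bc + ai * (bi + ci - 2 * bc);
            // Constraint 2: linear === quadratic pins res down.
            res - bc === ai * (bi + ci - 2 * bc);
        }
        // NOTE: the demo forces an unchecked field-to-bool conversion here;
        // `res == 1` is a safe stand-in that costs extra constraints.
        out[i] = res == 1;
    }
    return u32_from_bits(out);
}
```

Spot-checking the identity: with a = 0 the value reduces to bc (i.e. b AND c), and with a = 1 it reduces to b + c - bc (i.e. b OR c), which is exactly the majority of three bits. In the demo, this two-constraints-per-bit encoding brought the entry point from 260 down to 164 constraints; exact counts will depend on the compiler version.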