At the start, I have one question: who of you has heard about zk-SNARKs? OK, so that's a majority. Who of you has a rough idea what they actually mean and do? OK, so yeah, that's good for a start. So my name is Jacob. I'm a PhD student here at TU Berlin in the Information Systems Engineering Department. And with me today is Stefan, who will help me out with the demo, because in my browser Remix is not working as well as in his browser. I'm currently working on a compiler that compiles from a high-level language to computations you can do a zero-knowledge proof over, basically, and then verify that proof on the blockchain. To give a little context, I've prepared a couple of slides. I'm not too well prepared because I just found out that I'll be giving this talk in the afternoon today. So yeah, what's the basic setting we're talking about? I think it's delegated computation. What we have at the moment with blockchain systems is what we see on the left side: usually we have only on-chain processing. A transaction is sent to the network, and then it gets validated by every single node in the network. There are several proposals to change that, with Plasma, which Christian will talk about later, and other sharding techniques. But there's also another idea: that we no longer do all the computation on-chain, but do part of it off-chain. One proposal that did that was TrueBit, for example. There was this idea of a computation marketplace, where you outsource computations off-chain and then publish the result. In the TrueBit case, fraud can be detected, and people who provide invalid results will be punished. So that's one way of dealing with the problem that when you don't actually execute a computation on-chain, you cannot be certain that it's actually correct. And another approach to the same thing is that you not only do the computation off-chain, but during the computation, you create a proof.
And that proof proves that the computation was done correctly. Then all you have to do is take the result on-chain and validate on-chain that the proof is valid. So instead of doing the whole computation on the blockchain, you do it off-chain and only validate on-chain that it's actually okay. And that's the setting we're talking about here. Okay, so we call the tool ZoKrates — zero knowledge — and the idea is the famous saying from Socrates, "I know that I know nothing"; here it's "I know that I show nothing". A bit of a bad pun maybe, but the idea is that when I do the computation off-chain, I can use private information in that computation without ever leaking it to the public. And that's the zero-knowledge property of these proofs. Okay, so what is ZoKrates? I'll give you a demo in a minute. It's a tool that takes a high-level language. It's not super powerful, because it is limited: it's not Turing complete, because of the underlying abstractions needed to do zk-SNARKs. But it's a high-level language; it's very understandable and quite simple. Then we have a compiler which transforms that into a representation you can do zk-SNARKs with, basically. So we have that high-level language, and we compile these statements into a set of conditions. This set of conditions has a special form and is called a rank-1 constraint system — we just have a huge list of conditions. You can then transform that into a quadratic arithmetic program, which is basically equivalent to a tree with only additions and multiplications on the nodes. And from that, using the zero-knowledge work that has been developed, you can generate a prover and a verifier, and use these to actually do computations and, with the computations, generate a proof. Okay, so here's what the process looks like, with another slide as well.
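To make the rank-1 constraint system idea concrete, here is a minimal Python sketch — not ZoKrates output, and the field prime and witness layout are just illustrative assumptions. Each constraint is a triple of vectors (a, b, c), and a witness w satisfies it when ⟨a,w⟩ · ⟨b,w⟩ = ⟨c,w⟩ modulo the field prime:

```python
# Minimal illustration of a rank-1 constraint system (R1CS) check.
# The prime and the example constraint are illustrative assumptions,
# not what the compiler actually emits.
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def dot(v, w):
    # Inner product over the prime field.
    return sum(x * y for x, y in zip(v, w)) % P

def satisfies(constraints, w):
    # A witness satisfies the system if every constraint
    # <a,w> * <b,w> == <c,w> holds mod P.
    return all(dot(a, w) * dot(b, w) % P == dot(c, w) for a, b, c in constraints)

# Encode "out = x * x" with witness layout w = [1, x, out]:
constraints = [
    ([0, 1, 0], [0, 1, 0], [0, 0, 1]),  # x * x = out
]
print(satisfies(constraints, [1, 3, 9]))  # True: 3*3 == 9
print(satisfies(constraints, [1, 3, 8]))  # False: 3*3 != 8
```

The compiler's job, described below, is to flatten a whole program into such a list of constraints.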
We have code; we compile that to a rank-1 constraint system; that gets compiled to a quadratic arithmetic program; and then we generate a zk-SNARK based on that representation. So this part is basically well understood, I would say, and is covered in the library libsnark, which is publicly available — it's used by Zcash as well. And this part is the main contribution of the compiler: that we take the high-level language and transform it into the set of conditions. And here we have a little code sample at the bottom showing what such a program could look like. We have positive integer variables, and here we check some conditions depending on x and do computations based on these outputs. Okay, so that's the general setting and what it looks like. Now I would like to move on to the demo, but maybe we have time for quick questions regarding the whole zk-SNARK process, if it wasn't clear from my introduction what the basic idea was. So, any questions at this point, or do you first want to see the demo? Okay. Which acronym? zk-SNARK. Oh — zero-knowledge succinct non-interactive argument of knowledge. That's a very helpful question, thank you. So yeah, these proofs have several properties. The first proofs that were able to verify that computations were correct were interactive proofs. That meant several rounds of communication between the verifier and the prover were necessary to actually reach a certain level of certainty that the computation was correct. These zk-SNARK constructions don't have that interactivity property. So they're non-interactive, and that's part of the name. The succinctness property just means the proofs are short. That means they're cheap to send around the network — it's just a short format. And the zero-knowledge property is also a nice property, and it basically comes for free in the construction.
That means when I do my computation off-chain, I can use data that will later be publicly visible, but I can also use private data — for example my ID or something — in the computation to arrive at an outcome. And that outcome can then be validated or verified without me leaking the private information. So I keep my private information to myself and can still prove that I have it. One good example maybe: I have an ID, and there is a hash of my ID on the blockchain. Now I can prove that I have that ID by hashing it off-chain and providing a zero-knowledge proof that I have the value — that means the ID that hashes to the value stored on-chain — without ever revealing my ID information. So I keep the sensitive information completely to myself and can still make statements about it and verify those statements. That's the ZK part of zk-SNARKs. Okay, a question? Yeah — the proof computation requires a much larger amount of work, but the verification is very cheap, and that's a cool property, because we use that here: we do the verification directly in Solidity, that means in the EVM. If that part were expensive, that would not be possible. It's just several elliptic curve operations — a couple of pairings, additions, scalar multiplications and actual multiplications. It's a couple of operations; it's cheap to do. I think at the moment it's about half the gas limit for one verification, right? Round about that. So it's still expensive, but compared to the work you have to do off-chain to generate the proof it's cheap, and it's actually doable today on the Ethereum blockchain. Yes? What is the goal? Okay. Yeah, so one goal is privacy. That was the example I gave you: that I can prove things about information that I do not reveal. And we don't have that at the moment.
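The ID example above splits into a public commitment and a private secret. This sketch only shows the commitment side — the zero-knowledge proof of knowing the preimage is exactly what the SNARK provides and is not shown here; the names and the use of SHA-256 are illustrative assumptions:

```python
import hashlib

def commit(secret: bytes) -> str:
    # The hash is what would be stored publicly on-chain.
    return hashlib.sha256(secret).hexdigest()

# Off-chain, I hold my ID; on-chain, only its hash is visible.
my_id = b"id-number-12345"           # private, never revealed
on_chain_commitment = commit(my_id)  # public

# A zk-SNARK would let me prove the statement:
#   "I know some `secret` such that sha256(secret) == on_chain_commitment"
# without revealing `secret` itself.
print(commit(my_id) == on_chain_commitment)  # True
```

The proof then convinces the verifier of that statement while the verifier only ever sees the commitment.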
At the moment you need to make all the information public before the blockchain can do computations on it, so you sacrifice privacy. Another thing — I don't see it happening yet, because proof generation is still quite expensive — but the special property of these zk-SNARKs is that the complexity of the verification is independent of the complexity of the computation you're proving. That means no matter how complex your off-chain operations are, the proof always costs you the same to verify. So at some point there's a break-even point between doing a very complex computation off-chain and doing it on-chain, where the off-chain part becomes much cheaper than doing the same computation on-chain. And you can do operations that would exceed the gas limit of a block and still verify them on-chain, so you also enhance the capabilities of the blockchain. Yes? Does verification depend on the amount? It depends on the number of inputs, but it does not depend on the length or complexity of the computation whatsoever. Okay, you'll see that in the demo. It's not too powerful, but you can do some things. I'm currently still implementing a hash function — it should be possible, it's just not done yet — so that would be very nice to have in the future. At the moment you can do condition checking, loops, function calls and arithmetic operations on field elements. The variables we use are elements of prime fields, but it's easier to just imagine them as positive integers. That works for most operations unless there are overflows, but I can't cover it in that detail. Okay, let me show you something. Oh, I'm in the wrong console. Okay, so what I have here — I'll just show you the file. I go to the examples directory and show you a very simple program, which is a simple add. That's what it looks like: you have a main function, it takes two parameters, and it returns the sum of both. Okay, so that's our very basic example.
You can do much more complex things, but I'll just show you what you can do with that. The tool provides a command line interface that allows you to compile such code and to compute witnesses — that means to derive solutions for the constraint system and, with that, solutions for the program in the first place. You can then export the verification code to a Solidity smart contract, so you can actually verify the computation you specified on the Ethereum blockchain. What these SNARK proofs require at the moment is a trusted setup phase. There are ways around that — there are several efforts in the community, especially by the Zcash people, to find a way around it — so I won't cover this now. So basically what you can do is: specify your program; compile it into a set of conditions; find a solution for your conditions with the tool; and then compute a proof. You can also generate Solidity source code that you publish to the network, and with that source code you can verify the computations — the solutions of your constraint system — on chain. So let me compile this code. Okay, that's what the compiled program looks like. It looks exactly the same, because these conditions are already in the right format. I can show you a more complex code sample, for example "choose K". This program computes n choose k, the binomial coefficient. Compiling that takes a while — a longer while now than it usually does. Oh yeah, here it is. You can see it's just tons and tons of conditions. So our simple example works for now, but usually the constraint systems are huge, even when they come from simple programs. Okay, what I can then do: I'll first compile the add example again, and then I will compute a witness — that means a solution for that program — and also a Solidity smart contract which I can use to verify that computation on chain. I do that using a shortcut operation which does setup, witness computation and Solidity code export in one step.
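As a rough model of what "computing a witness" means for that add program, here is a Python sketch — the variable names and the field prime are illustrative assumptions, not the compiler's actual output format:

```python
# Sketch: what a witness for the simple `add` program contains.
# Variables are prime-field elements, so addition wraps mod P --
# the "positive integers" intuition only holds below the field size.
# P and the wire names ("~one", "~out") are illustrative assumptions.
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def compute_witness(a: int, b: int) -> dict:
    out = (a + b) % P
    # The witness assigns a value to every variable of the flattened
    # program, including the constant one-wire and the output wire.
    return {"~one": 1, "a": a % P, "b": b % P, "~out": out}

print(compute_witness(1, 1)["~out"])      # 2
print(compute_witness(P - 1, 2)["~out"])  # 1 -- an overflow wraps around
```

This assignment is what gets fed to the prover, together with the constraint system, to generate the proof.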
I provide two arguments, let's say one and one — I want to calculate the sum of a and b. I do that, and what I get — let's scroll up. Okay, first I get a witness. That means I get a satisfying set of variables for my program: a is one and b is one — that's what I gave to the compiler — and the output is two, so it computed that simple sum correctly. But it also generated a verification key in a Solidity-compliant format. I can take that, paste it into the template, and deploy it to the network, and then I can verify proofs with it — proofs that I can also generate with this tool. Down here we also have a proof, which I can then use to validate things from Solidity. Actually, today we validated the first proof on the Ropsten testnet, and Stefan will briefly show us this process, because it works better on his machine. So what we validate on chain now is that one plus one is two. But it could be an arbitrarily complex computation, and the verification would always be the same. Exactly — at this point we don't use private data. What we could also do is prove that we have two numbers whose sum equals two. Then we would only provide the two and the proof, and still we would be sure that the person who generated a valid proof was in possession of two numbers — for example, one and one — that satisfy the constraint system. Yes. Yeah, you send the inputs — that means in this case one, one and two — and then you send a proof, which is essentially a couple of elliptic curve points that are then used to check conditions. What the verification logic checks is that you actually used the correct code — the correct program — and did not just use another program to compute the result. Exactly. And also that the answer is correct and that you actually used the correct program code to generate that answer.
So you can't lie about the source code you used to arrive at that answer. And how much gas did you say that costs? It's at the moment half the gas limit — about 1.9 million for one verification. That's fixed, because you always do the same verification steps no matter how complex your off-chain computation is. It only depends slightly on the number of input variables: the gas cost grows a bit with the number of input variables you have, but generally the large part of the gas cost remains constant. Well, someone would have to come up with better zk-SNARKs, or would have to make the verification operations cheaper or more efficient in hardware, so that the Ethereum Foundation could reduce the gas cost for these operations. But what it is, is elliptic curve operations — they are costly, and several are required. So I think either you get new zk-SNARKs that allow you to do less costly operations, or the operations have to become more efficient, but I don't see that at the moment. But you could have computations off-chain that wouldn't even fit in a block, so in that case you would actually save gas. So it depends on the use case. And of course there's a price for privacy: at the moment you don't have privacy, you cannot make statements about your private data, and with this you can. Of course it's expensive at the moment, but it's a thing you simply can't do without it. Okay, let's continue with the demo. Okay, so we deployed that contract to the Ropsten testnet. Maybe you can show the source code — that's what the Solidity verification code looks like. There's also some unnecessary testing code in there, because we just stole that from Christian and then made some modifications and added the verifier.
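To put the gas discussion from a moment ago into numbers, here is a toy cost model. All constants are illustrative assumptions (only the 1.9 million base figure comes from the talk); the point is just that a constant verification cost beats a per-step on-chain cost once the computation is long enough:

```python
# Toy gas model for the break-even point discussed above.
# All numbers are illustrative assumptions, not measured costs.
VERIFY_BASE = 1_900_000    # roughly constant on-chain verification cost
VERIFY_PER_INPUT = 50_000  # hypothetical small growth per public input
ONCHAIN_PER_STEP = 5_000   # hypothetical gas per computation step on-chain

def snark_cost(n_inputs: int) -> int:
    # Independent of how long the off-chain computation ran.
    return VERIFY_BASE + VERIFY_PER_INPUT * n_inputs

def onchain_cost(n_steps: int) -> int:
    # Grows linearly with the length of the computation.
    return ONCHAIN_PER_STEP * n_steps

# Find the first step count where re-executing on-chain costs more
# than verifying a proof with 3 public inputs:
n = 0
while onchain_cost(n) <= snark_cost(3):
    n += 1
print(n)  # 411 under these made-up constants
```

Under these made-up constants the break-even sits around 410 steps; with real gas prices the exact point differs, but the shape of the argument is the same.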
Yeah, so that's the function you call when you want to provide a proof and have it verified: you call the verify function as a transaction and then you provide a number of elliptic curve points. These are the A and A' — each is always two large numbers, basically, that make up one point in the case of one of the groups we're using, and in the case of the other group it's even four numbers per point, but it doesn't really matter. You provide a number of points, and then you provide input variables — these are the parameters we gave to the compiler, so in our case one, one, two — and then the conditions are checked on chain. Okay, now we want to actually do the transaction and check whether one plus one is actually two. What I didn't mention yet: zk-SNARKs are probabilistic proofs, so we can only prove with a very high certainty that one plus one is two. We can't be 100% sure. But we're using a blockchain, so we don't have 100% certainty anyway. Yeah — very low probability. All right, here you see the parameters: these are the elliptic curve points that come out of the zk-SNARK machinery, which we just need to make sure that the correct program was used and that nobody cheated with the program, and here are our input parameters at the very end. That's the input we now take and pass to the Solidity verify function. We do that from Remix here and then use MetaMask to send it to the Ropsten node. It takes a while — that's just the confirmation. It's paid, yes. And then hopefully we can see it on Etherscan after it has been mined in a minute, and we can actually see whether the validation has succeeded or failed. It's not that we want to use a testnet instead of the mainnet — the reason is that this is not yet possible on the mainnet. The pairing operation will only be added with the Byzantium hard fork, which will happen next week. So yeah, wait until next week to use it on the mainnet.
No — I also have to say that at the moment it's an early prototype. It's not secure, not well tested. We're just showing you first results, and I would not use it for any production use cases at the moment. Definitely not. Okay, and here you can see we actually have a transaction, and it triggered an event saying that the transaction successfully verified. And because we have the inputs one, one and two, we now know that one plus one is most likely two. So that's it with our short demo. I think it was a lot; I could not cover the low-level details because they're actually quite complex. But now we're open to any questions. So, 1.7 million gas? Yeah, quite some gas. It's precompiled contracts. So essentially it's not really machine code, but like machine code: things you cannot do in the EVM, or that would be too costly to do in the EVM, now have a direct implementation, and these operations can be called from the EVM. You can think of it as if a new opcode were added to the EVM that can do pairing operations — that's a bilinear map on elliptic curves, and it's needed for the verification — and you can do multiplication of elliptic curve points. So those are basically the precompiles Christian wrote that are needed to do the verification. So from next week on, the verification of zk-SNARKs will be possible on the mainnet. And it's only elliptic curve operations we need. Yes — the common reference string, or trusted setup, you use: is it the same thing as in Zcash? Well, in the actual Zcash setup phase they had six people sitting in a room for like a day or something, copying DVDs and passing them around. So no, that's not what we're using. At the moment, for this prototype, we're actually using a local trusted setup phase. That's also why I say don't use it in production — because you can only trust your own proofs, basically, or proofs by people you trust anyway.
So there's not much point in that, but there are efforts to create distributed setup phases that are more efficient and scalable than the original distributed protocol that the Zcash guys used during their setup phase. Stefan is currently looking at options for bringing that to Ethereum, so you could actually do the setup off-chain but synchronized via smart contracts. Then a number of people could do a setup together and be sure that, as long as they themselves were honest, the proofs can never be faked. That's ongoing research, I think, and also the Zcash guys reached out and said they have a new protocol — it's not published yet, so we'll have to see where that goes. I hope there will be more efficient distributed setups in the future, and then this tool will greatly benefit. Yes — you do that with multiple people, and there is some secret data being generated during this process. Everyone generates their secret data independently of each other, and as long as one single person in the group deletes their own secret data, the process works. With the Zcash trusted setup they were actually not sitting in the same room. They were distributed all around the globe, or at least they tried to be in different locations. There's an interesting article by Peter Todd, who participated in the setup: he was actually driving around the west coast of the United States, never staying at the same place for more than an hour, and recording everything so that nobody could bug his devices. Yeah, that's quite interesting. There's a video too — there's a video of them doing some of that stuff, like destroying the hard drives, which might help. Well yeah, right, the DVD thing was just so you could do the computation and the networking on two different computers. Yes — I don't know what recursive zk-SNARKs are. I'm not aware — are you aware?
I'm not 100% sure, but I would say that a recursive zk-SNARK is a zk-SNARK that verifies another zk-SNARK. I mean, a zk-SNARK verification is just a computation itself, right? So you can use zk-SNARKs to verify zk-SNARKs, and because the complexity needed to do the verification does not depend on the actual computation, you can do this bootstrapping process and verify arbitrarily complex computations by doing that recursively. Sorry — you were talking about the data which should be destroyed? No, no, just the computation you're performing. Sorry — where I just said data, I actually meant the computation. Yeah, so at the moment the setup phase was just local. I didn't implement that part myself — we used a library for it — so we hope that the library forgets or destroys the data. We can look at the source code, which actually seems to do it, but if you have a compromised device there's still a risk. So a distributed setup phase is definitely needed, also so that I can convince others that the setup process has been performed correctly and that they can trust the proofs I generate. This tool? No, not yet, because I haven't implemented all the CLI operations completely, but I plan to have a published version at DevCon, so in like half a month. That's cool. Yeah. So ideally the tool works perfectly by then and has a hash function implemented — but no promises. No more questions? Then I'll pass on to Christian.