Okay, so we moved on to the Best Paper Awards, and there are just the Best Paper Awards left. So this year there were three Best Paper Awards. The first one is for Simple Proofs of Sequential Work by Bram Cohen and Krzysztof Pietrzak, and Krzysztof will give the talk.

Okay, welcome. So this is joint work with Bram Cohen, who is also somewhere here, so we can ask him a few questions. I will tell you something about a very simple result, Simple Proofs of Sequential Work. Here is the outline of the talk. I will first define and motivate the problem, that is, the object we look at, proofs of sequential work. Then I will give you the actual construction, and actually almost the entire proof. And if there is time, I will also tell you why we looked into this problem in the first place, which is sustainable blockchains.

So what are proofs of sequential work? Let me step back a bit. In the mid-90s, Rivest, Shamir, and Wagner initiated a field called timed cryptography, and they constructed this object called the time-lock puzzle. It is so simple that I can show you here what this thing is. Not surprisingly, it is something with RSA. The puzzle is simply an RSA modulus N, a random element x, and a time parameter T; think of T as 2 to the 40, for example. And the solution to this puzzle is x to the 2 to the T modulo N. Now, the puzzle generator, who knows the factorization of N, can compute this solution with just two exponentiations, whereas it is conjectured that if you do not know the factorization, you have to do T sequential squarings on top of x to arrive at the solution of the puzzle. And if you equate sequential computation with computation time, you have a way to send a message to the future, right?
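The puzzle just described can be sketched in a few lines. This is a toy version with tiny, completely insecure parameters; it only illustrates the trapdoor, namely that knowing phi(N) lets the puzzle generator shortcut the T sequential squarings:

```python
# Toy time-lock puzzle in the style of Rivest, Shamir, and Wagner.
# The modulus is far too small to be secure; a real N would be ~2048 bits
# and T would be much larger.

p, q = 1000003, 1000033          # toy primes
N = p * q
phi = (p - 1) * (q - 1)

x = 12345                        # puzzle input
T = 100_000                      # time parameter: number of squarings

def solve_sequential(x, T, N):
    """Solver without the trapdoor: T sequential modular squarings."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y                     # x^(2^T) mod N

def solve_with_trapdoor(x, T, N, phi):
    """Puzzle generator: reduce the exponent mod phi(N), then one
    fast exponentiation. Two exponentiations total."""
    e = pow(2, T, phi)           # 2^T mod phi(N)
    return pow(x, e, N)          # x^(2^T) mod N

assert solve_sequential(x, T, N) == solve_with_trapdoor(x, T, N, phi)
```

The sequential solver does T dependent squarings, while the trapdoor solver's cost is independent of T; that gap is exactly the "message to the future" property.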
Because it is conjectured that you cannot speed up this computation even with massive parallelism, short of parallelism massive enough to factor N. Okay. So then nothing happened in this field of timed crypto until 2013, when Mahmoody, Moran, and Vadhan introduced this new object, publicly verifiable proofs of sequential work. Let me compare what this object does to the time-lock puzzles. First of all, it has a different functionality. We don't want to send messages to the future; we just want a proof system, a proof that one did some sequential computation. That's it.

What was the motivation for this? The one application it had back then was non-interactive timestamping. Say I have proved that P is not equal to NP. Then I hash this document, I start this non-interactive proof of sequential work, and twenty years later, when, you know, it is safe to announce that P is not equal to NP, I can prove to you that I already had the document many years ago, because there is this sequential computation on top of it that takes that long.

Okay. So it is a different functionality, but on the other hand, they also did not require this weird non-standard RSA-type assumption; they proved their construction in the random oracle model. And if you want a standard-model assumption, they showed that it also works with collision-resistant hash functions plus some not quite standard assumption, namely hash functions that are inherently sequential. So, a much nicer assumption. Another point that is very important for us, and that is also emphasized in the title of their paper, is that these proofs of sequential work are publicly verifiable.
So everybody can verify that the proof was correctly computed, whereas if you wanted to build proofs of sequential work from the time-lock puzzle, there is no obvious way to do it, because you need the factorization to verify the solution, but with the factorization you can also solve the puzzle faster. So, publicly verifiable.

The protocol itself is defined very simply. There is a verifier and a prover. The verifier samples some public random coins chi, called the statement, and sends them over to the prover together with a time parameter T, like 2 to the 40. The prover computes a proof and sends it back to the verifier, who runs some verification algorithm and accepts or rejects.

So how are completeness and soundness defined? I will only tell you how it is done in the random oracle model, because it really simplifies things. Completeness: the honest prover should be able to compute an accepting proof by making T sequential queries to the random oracle. And soundness is defined as follows: a cheating prover, given the challenge chi, should not be able to come up with an accepting proof unless it makes almost T sequential queries. Even if it can make many queries in parallel to the random oracle, it should still need almost T sequential rounds of queries.

Okay. So why is this a proof of sequential work? Because for an accepting proof, this is exactly what the thing certifies. Again, you can equate sequential computation with time, so in a way this is a proof that the prover has spent time T after receiving the statement chi.
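In constructions based on this definition, soundness is enforced statistically: the verifier issues many independent challenges, and a prover that skipped an epsilon fraction of the sequential work is caught by each challenge with probability about epsilon. A quick back-of-the-envelope (the numbers here are purely illustrative):

```python
# Soundness amplification: if each independent challenge catches a
# cheating prover with probability about eps, then k challenges catch it
# with probability 1 - (1 - eps)^k.

def catch_probability(eps: float, k: int) -> float:
    """Probability that at least one of k independent challenges
    catches a prover that cheats on an eps fraction of the work."""
    return 1 - (1 - eps) ** k

# A prover doing only 0.9 * T sequential queries (eps = 0.1):
for k in (10, 50, 100, 300):
    print(f"{k:3d} challenges -> caught w.p. {catch_probability(0.1, k):.6f}")
```

A few hundred challenges already push the catching probability overwhelmingly close to 1, which is why the schemes below get away with very short proofs.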
The issues with the original construction of Mahmoody, Moran, and Vadhan all come from the fact that it uses a fancy combinatorial object that we already heard about this morning, namely depth-robust graphs. One problem is the space complexity of the prover: the prover not only needs time T, it also needs space close to T, which can be massive. Another problem is that because depth-robust graphs are involved, the concrete constants are not very good. And a third problem, which is only a problem for our application that I will mention later, and was not a problem for them, is uniqueness: once the prover really spends T sequential steps, it can then generate a million different accepting proofs; there is no canonical proof. That was not a problem for their application to non-interactive timestamping, but it is for us.

The new construction that I will show you solves problems one and two. First of all, in terms of space, you can make the prover's space polylogarithmic in T. For example, if T is 2 to the 42, that is roughly ten orders of magnitude: instead of petabytes, the prover needs only a tiny amount of space, a significant improvement. And, as I will show you, the construction is super simple, no bad constants, no nothing. Finally, the uniqueness problem is something that you really would like to have solved, and it is a really nice open problem: come up with unique publicly verifiable proofs of sequential work without using iO
or similarly heavy objects; something simple.

Okay, so, the construction. I will use three basic concepts. The first one I will only use to explain the MMV construction, and you already heard about these things: depth-robust graphs. A depth-robust graph is a DAG, and we say it is (e, d)-depth-robust if, after removing any e nodes, there remains a path of length d. I claim that this particular graph here is (2, 3)-depth-robust: for example, if you remove these two nodes, there is still a path of length 3 left, and for this example you can brute-force check that this is true whichever two nodes you remove. Whenever I say depth-robust graph, I mean a graph where the depth-robustness is very high: think of e and d as being basically linear in the number of nodes.

The second concept is graph labeling. If you give me that same graph and some hash function H, I define the labeling of the vertices as follows: the label of a source is simply the hash of some dummy element, and more generally, the label of any node is defined as the hash of the labels of its parents. So if you give me the graph and the hash function, you know exactly what every node's label is.

And the third concept that I would like to introduce is sequentiality, the fact that random oracles are sequential. Assume there is this random oracle, and some adversary makes two queries: a query x resulted in a 256-bit output y, and some other query x' resulted in an output y'. Now, if this y appears as a substring of this x', so 256 consecutive bits of the query x' equal y, then I can conclude that almost certainly the query x' was made after the query x; by the definition of the random oracle this is going to be true. And if you want to step outside the random oracle model, you would call a hash function with this property sequential. Okay.
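The graph-labeling concept above can be sketched as follows. This is a toy version, with SHA-256 standing in for the random oracle, and a DAG given as a parent list; the node index is hashed in as well so that distinct nodes get distinct labels, a detail I am assuming here:

```python
import hashlib

# Labeling of a DAG: the label of each node is the hash of the statement
# chi, the node's index, and the labels of its parents (a source has no
# parents).  Because each label's preimage contains the parent labels,
# computing a label along a path forces sequential hash queries.

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def label_graph(chi: bytes, parents: dict) -> dict:
    """parents maps node index -> list of parent indices; nodes are
    assumed to be numbered in topological order (0, 1, 2, ...)."""
    labels = {}
    for v in sorted(parents):
        labels[v] = H(chi, v.to_bytes(8, "big"),
                      *(labels[u] for u in parents[v]))
    return labels

# A tiny path graph 0 -> 1 -> 2: label 2 requires two sequential hash
# calls on top of label 0.
labels = label_graph(b"statement", {0: [], 1: [0], 2: [1]})
```

By the sequentiality property of the oracle, whoever produces the label of node 2 almost certainly queried the oracle on the label of node 1 first, and so on down the path.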
So how does the MMV construction look? The first message is as before: the verifier sends some random coins chi to the prover and a time parameter, let's say T. Here is what the prover does. First of all, the protocol specifies, for every possible time parameter, some DAG; if the parameter is T, the protocol says use this particular DAG. And the random coins are used to sample a fresh random oracle; as we heard in the talks this morning, you can just prepend the random coins to the queries, so the oracle becomes H(chi, .). Then what the prover does is it labels the DAG, commits to all these labels, with a Merkle commitment say, and sends the commitment over. After this we have a second, interactive, challenge-response phase, where the verifier challenges the prover to open a subset of the nodes; the prover opens these nodes together with their parents, and the verifier checks that all the openings are correct and that every opened label is really the hash of the labels of its parents. And, yeah. Now, this is not what I promised; I promised two messages, the statement and the proof. But this challenge phase is public coin, so you can just use Fiat-Shamir here to get a two-round protocol. That is the MMV construction.

Proof sketch, how they proved that this is really a proof of sequential work. They said, okay, assume the graph is really depth-robust, and consider the point in time where the malicious prover sends this commitment over to the verifier. At this point, the prover is committed to its labels, at least in some loose sense: some values are supposed to be labels, and it will only be able to open the commitments to those. Now, let's say that a label is bad
if it is not the hash of the labels of its parents that the prover committed to. Basically, this means a label is bad if, when I open the label and its parents and hash them, the verifier's check fails.

Then you make a case analysis. Case one: assume there are lots of bad labels. Then the verifier will catch the prover: remember, it asks the prover to open some random nodes, so it catches a bad one with high probability. Case two: there are few bad labels, and by few I mean less than e, where the graph is (e, d)-depth-robust. But if there are less than e bad labels, by definition, if I remove these bad labels there still exists a path of length d going through the remaining good labels. And by the sequentiality of the random oracle, I know that the prover must have made at least d sequential queries before computing its commitment. And if d is not too far away from T, that gives soundness. That was the first construction.

So how does the new construction look? The MMV construction is very nice in the sense that it is very modular; it is clear which component has which job, right? There is this Merkle tree commitment that commits to the nodes of this colored graph, and the depth-robust graph, which has this property that if I remove a few nodes, there is still a long path; every component has its own role. Our construction forgets about that. It just consists of one big graph, and actually a very simple graph. Let's start with a tree, like a Merkle tree; this is going to be the commitment, and these will be the nodes of our graph. Now let's add some edges to it. What edges do we add? For every leaf, for example this colored leaf here, we consider the path from the leaf up to the root, and we add an edge to this leaf
from all the left siblings of the nodes on that path: this is the path, and these here are the left siblings, so we add an edge from each of them into the leaf. Conveniently, these are exactly the nodes that get opened anyway in the Merkle opening of that leaf. And we do that for all the leaves, and then we end up with this graph. Okay?

Now, before I tell you something about the properties of this graph, the protocol proceeds basically as in MMV. The prover computes the labeling of this particular graph, then it sends the label of the root, which now functions like a Merkle commitment, over to the verifier. Then the verifier challenges the prover to open some leaves, for example this leaf here, and the prover opens this leaf, and opening this leaf means it also has to send over this node, this node, this node. And the verifier, first of all, like for a Merkle commitment, checks all the consistencies along the path, that everything hashes together up to the root. But on top of that, it will also check that hashing the labels of the parents of that leaf really gives the label of the leaf. And conveniently, all the parents of that leaf are a subset of the nodes that have to be opened anyway to open the Merkle commitment. So the communication is not even bigger than a standard Merkle opening; the verifier just has to hash a little bit more for this one extra check, and that is basically it.
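The construction as I have read it from the talk can be put into a small end-to-end sketch. This is my own toy rendering, not reference code: nodes are binary strings ('' is the root), inner labels are Merkle-style hashes of the children, a leaf's extra parents are the left siblings on its path to the root, and SHA-256 stands in for the oracle:

```python
import hashlib

DEPTH = 3  # a tiny tree with 8 leaves

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p if isinstance(p, bytes) else p.encode())
    return h.digest()

def extra_parents(leaf: str) -> list:
    """Left siblings of the nodes on the path from `leaf` to the root,
    i.e. one extra parent for every right turn on the path."""
    return [leaf[:i] + "0" for i in range(len(leaf)) if leaf[i] == "1"]

def compute_labels(chi: bytes) -> dict:
    labels = {}
    def walk(node):                    # left subtree before right, so the
        if len(node) == DEPTH:         # left siblings are already labeled
            labels[node] = H(chi, node,
                             *(labels[p] for p in extra_parents(node)))
        else:
            walk(node + "0")
            walk(node + "1")
            labels[node] = H(chi, node, labels[node + "0"], labels[node + "1"])
    walk("")
    return labels

def open_leaf(labels: dict, leaf: str) -> dict:
    """Labels of the siblings on the path to the root (a Merkle opening)."""
    sibs = [leaf[:i] + ("0" if leaf[i] == "1" else "1")
            for i in range(len(leaf))]
    return {s: labels[s] for s in sibs}

def verify(chi, root, leaf, leaf_label, opening) -> bool:
    # Check 1: the leaf label is the hash of the labels of its parents.
    # Every extra parent is one of the opened siblings, so no extra data.
    if leaf_label != H(chi, leaf, *(opening[p] for p in extra_parents(leaf))):
        return False
    # Check 2: the opening is consistent with the committed root.
    cur = leaf_label
    for i in range(len(leaf) - 1, -1, -1):
        sib = opening[leaf[:i] + ("0" if leaf[i] == "1" else "1")]
        pair = (cur, sib) if leaf[i] == "0" else (sib, cur)
        cur = H(chi, leaf[:i], *pair)
    return cur == root

chi = b"statement"
labels = compute_labels(chi)
leaf = "101"
ok = verify(chi, labels[""], leaf, labels[leaf], open_leaf(labels, leaf))
```

Note how check 1 reuses the Merkle opening: for leaf "101" the extra parents are "0" and "100", both of which are siblings on the path, which is exactly the "communication is not bigger than a standard Merkle opening" point from the talk.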
Okay, so this is such a simple construction, and I will show you that it has very nice properties, that it is hard to believe no one came up with it before. And actually that is what I believed until a few hours ago, but it turns out, someone here just told me, and I looked it up: there is a TCC paper about timestamping that used basically the same graph, for a different problem. They used this graph to generate certificates one after the other, such that given two certificates you can verify that one was generated after the other. So basically they already had this graph, although I have not really read the paper in detail yet.

So here is the proof sketch. As before, once the prover sends this commitment, the label of the root, to the verifier, it is committed to the labels of all the nodes. And again, let's say label i is bad if it is not the hash of the labels of its parents. So if the prover tries to cheat, what does that mean? It committed to labels where the red ones are not really the hash of the labels of their parents. Now define the set S, which contains all these bad nodes, plus everything below them: here is a bad node, you take everything below it; here is another one, again with everything below it, and so on.

So here are two claims, really elementary claims, about this set. The first claim is: no matter what the bad labels are, once we form this superset S, there is a path going through all the nodes that are not in S; in this picture, it is just this path here. That is the first claim. The second claim concerns the probability, when the verifier challenges the prover to open some random leaves,
that the prover will not be able to open a random leaf correctly: it is exactly the fraction of leaves that lie in the set S. So what does that mean? If you combine these two claims, you get that if the prover cheats, and instead of making T sequential queries it only makes, say, (1 minus epsilon) T of them, then each random challenge catches it with probability about epsilon, so with a few hundred challenges we catch it with overwhelming probability. Very simple parameters, and that is basically it. That is the construction, just a little more expanded in the paper.

Which means that I can actually tell you a bit about the motivation, sustainable blockchains. I think everybody here knows about Bitcoin, and everyone knows what proofs of work are. So you know that securing the blockchain underlying Bitcoin is based on proofs of work: there are these gigantic farms that do nothing else than run dedicated hardware for producing proofs of work. This is a problem for various reasons. First of all, it is an ecological problem: there are estimates that say that the energy consumption of Bitcoin mining is comparable to Denmark, or some country of around that size. Also, the hardware is not useful for anything other than mining, so there is also a waste of those resources. And it is an economic problem for the long-term sustainability of the chain, because you constantly require miners to dedicate a lot of energy, so they have to be rewarded for that, and these rewards have to come from somewhere: either the currency will face significant inflation, or people must pay high transaction fees.
And Bitcoin actually handles this by having high inflation now and, supposedly, high transaction fees in the future. And there are other issues, security issues, that I don't want to go into.

So the question is how to get a more sustainable blockchain. We heard this morning a talk about proofs of stake, which is probably the most investigated alternative to proofs of work. The idea there is to use basically the currency itself as the resource. If that works out it is going to be great, but it is really very challenging. Another approach is to use not a virtual resource, so not the currency itself, but a physical resource that is less wasteful, so that not everyone has to burn computation. If you think about it, the two main resources we investigate are time and space, so if proofs of work use time, let's try to use space. And there is an abundance of disk space out there: many of you have a laptop, and you probably have like 100 gigabytes of free disk space on it. If you could just use these hundreds of gigabytes for mining, this would come at basically no marginal cost; you could just do that, many people would do that, and we would hope mining becomes decentralized.

Yeah, but using space as a resource is quite far from trivial. Let me recap what Bitcoin does. Bitcoin uses computation as a resource, and the idea is basically that the probability of mining a block, of being able to add a block to the chain and collect the block reward and the transaction fees, roughly corresponds to your fraction of the total hash power. And Bitcoin has this nice thing, this difficulty parameter that gets recalibrated, so that a block appears roughly every ten minutes, right?
So the chain just ticks along, a block every now and then. Excellent. So what Chia wants to do is replace the proofs of work by two proof systems: proofs of space, and as the second proof system, proofs of time, that is, proofs of sequential work. So we use space as a resource via proofs of space. I haven't defined those, but let me just say this is a proof system where a miner initializes its space once, which takes some time, but later, whenever it gets a challenge for that space, it can generate a proof that it dedicates that space basically in no time, with just a few lookups into the space, so it is super efficient. But this is a problem if you want to build a blockchain on it, because now it is not like one miner finds a proof of work, shouts it out and adds a block; here every miner can immediately produce a proof for every new challenge.

So what we do is basically assign a quality to each proof. This is an idea that has already been used, for example in Spacemint, the earlier proposal for a proof-of-space-based blockchain, and it also shows up in proof-of-stake designs: you generate the proof, you hash it or something like that, and you get some number, and this is the quality; in our case, the lower this number, the better the proof. And now, to get dynamics like the Bitcoin chain, what we do is we run the proof of sequential work on top of that proof of space, the challenge for it comes from the space proof, and the time that this proof of sequential work has to run on top of the proof of space, in order to finalize the block, depends exactly on the quality of the space proof. So if I can generate a proof of space that has awesome quality, I announce it, the proof of time finalizes that block quickly, and that block makes it onto the chain. And with that I'm pretty much out of time.
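The quality idea just described can be sketched as follows. This is a hedged illustration of my own: the hash-to-quality mapping matches the talk's description, but the scaling function from quality to sequential-work time is made up for the example:

```python
import hashlib

# "Quality" of a proof of space: hash the proof to get a number in [0, 1);
# lower is better.  The proof-of-sequential-work running time is then some
# increasing function of the quality, so the best-quality proof finalizes
# its block first.  The linear scaling below is purely illustrative.

def quality(proof_of_space: bytes) -> float:
    h = int.from_bytes(hashlib.sha256(proof_of_space).digest(), "big")
    return h / 2**256                    # uniform in [0, 1)

def posw_time(proof_of_space: bytes, base_T: int = 2**20) -> int:
    """Number of sequential steps the proof of time must run on top of
    this space proof before the block is finalized (illustrative)."""
    return int(base_T * quality(proof_of_space)) + 1

# The miner whose space proof has the best (lowest) quality wins the race.
proofs = [b"miner-a", b"miner-b", b"miner-c"]  # stand-ins for real proofs
winner = min(proofs, key=posw_time)
```

The point of the coupling is that announcing a space proof commits you to a deterministic amount of sequential work; whoever drew the best quality simply finishes first, mimicking Bitcoin's "first proof of work wins" dynamics without the energy burn.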
So this is my last slide, if there are any questions.

[Question] You said that it is a problem that one can generate many different proofs. Can you comment on why that is a problem for the blockchain, and why in your construction one can generate many different proofs?

Yes, okay, I have this slide anyway. So in Bitcoin, the chain is just a chain of blocks, and what a block contains is basically this proof of work, a seed that hashes to something small, okay. Now, in Chia, what we do is we alternate proofs of space and proofs of sequential work: this is a proof of space, this is a proof of time, and the challenge for each proof of sequential work, and also the time it requires, depends on the previous proof of space. And on top of that, once this thing is finished and somebody announces it, you can put a proof of space on top of it again. Now, this proof chain contains no transactions or anything; everything needs to be completely canonical. So the proof of sequential work has to be a deterministic function of the proof of space, and when a miner generates a proof of space, it should not be able to fiddle around with it to get many different proofs of space; the only choice it should have is to announce it or not.

And why is that important? It is important because otherwise there is a grinding attack. Because the challenge for the next proof of space depends on the proof of sequential work, if I could try out a million different variants of that proof, I could pick the one that gives me the best challenge for my proof of space; proofs of space are so efficient to compute that I can just locally generate a million different ones, pick the one I like most, and then announce that.

[Comment from the audience, inaudible.] Oh, I agree.
And in our construction the problem is: if I just cheat on a single leaf, like this one, it will result in a different commitment, so I will get a different proof. On the other hand, if I cheat only on a single leaf, the probability that I will fail the verification phase is very low, because it is just something like the number of challenges divided by the number of leaves; think of T as something like 2 to the 30. So I can just locally make little changes, and once I have computed the labeling this is cheap, and generate lots and lots of different accepting proofs.

Okay, so let's thank the speaker again.