Okay, hi everyone. Let's start with a little bit of motivation, even though you've already seen some in the previous talk. Our story is that we're trying to get consensus: running some kind of agreement protocol. And getting consensus is hard. It's something that's been studied for many, many years, and it's still studied today, in the context of blockchains and elsewhere. The main challenge in consensus is that you don't know whom to trust: there are multiple parties, and some of them might be malicious. In fact, in the permissionless setting, where we have no identity verification and anybody can join the protocol, consensus is simply impossible. The reason is that an adversary can create multiple copies of itself, so it is always in the majority, and on the other hand we know that we need an honest majority to get consensus.

So what can we do? This, I think, is the main new idea that Satoshi Nakamoto had with Bitcoin: instead of counting people, doing consensus with a majority of parties, we count resources. If we use resources that are publicly verifiable, so that we can check that somebody actually expended their resource, then we can replace the assumption of an honest majority of parties with an honest majority of resources, and suddenly the problem becomes solvable even in the permissionless setting.

So what kind of publicly verifiable resources can we check? One option is very well known: it's used in a lot of cryptocurrencies, but it goes back much further, and that's proofs of work. These are very simple and easy to do; people implemented them many years ago. But they have a big problem: they're environmentally extremely expensive. I believe Bitcoin now uses as much electricity as a medium-sized country. That's bad; we don't want to use proofs of work.

What else can we do? A suggestion that's getting more popular is to use money. Unfortunately, we don't know how to get a publicly verifiable proof for regular currency; I can't prove to you that I burned dollars, at least not electronically. In the context of cryptocurrencies, though, we can prove things like that. This is what's called proof of stake: I use the cryptocurrency's own money and prove that I've put it at stake. However, proofs of stake are a bit problematic. They usually require non-standard assumptions, such as proofs of secure erasure, and they have some inherent vulnerabilities, such as 51% capture attacks: if somebody ever gets a majority of the money in the cryptocurrency, they will have a majority forever; we can't even tell that this has happened, and we can't undo it. It's not clear that this completely breaks everything, but it would be bad if this were the only possible solution. So that's the motivation: we're looking for a different resource.
So what I'm going to talk about now is the definition. It's going to be a little hand-wavy, because I won't get into the details. Ideally, what we want in a proof of space-time is a proof that we used disk resources. What is a disk resource? It means I filled a certain amount of space on my disk for a certain amount of time. We'll always talk about a unit of time, but that unit can be whatever you like. The amount of resource I expended is the size of the disk times how long I used it, which is why it's called space-time. (We haven't yet included gravity.)

A proof of space-time has two phases. First there's an initialization phase, in which the prover generates the data that will fill its disk. Why does the prover have to generate the data itself? Because we want low communication. In something like a proof of replication, where you want to store useful data, I actually have to receive all of that data. But if I want to use a large amount of disk space while keeping communication low, the data has to come from me, so I must be able to generate it myself. Second, there's what we call the execution phase. We run it every unit of time, and in it I prove that I'm still storing the data. If I do this correctly, I've shown you that I filled my disk and that I've kept it full for however long I've been running execution phases.

Okay, so that's the ideal. Sadly, we can't quite get it. Why? Because with low communication, and in the absence of other strange assumptions such as timing assumptions, we can't actually prove that the data was stored. There's always a simulation attack: I can store the small random seed used to create the data, and then, instead of storing the data itself, re-run the initialization phase with the same seed and get the same result. There's no way to prove, when I run the proof phase, that this is not what happened.

So what do we actually prove? Instead of proving that I've definitely stored the data, we prove an OR statement: either I've actually stored the data, or I've done enough work to reconstruct the data every time. This is inherent in the definition: the prover is allowed to trade work for space-time. Why is this still good? Why can we still use it as a proof of resource consumption? The idea is that the cost of recreating the data will be high, and in particular higher than the cost of just storing it. If that's the case, rational parties would rather store the data than recreate it, and in the context of things like cryptocurrencies that's definitely good enough.
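To make the simulation attack concrete, here is a toy sketch. All names and parameters are illustrative assumptions of mine, not from the paper; the point is only that low communication forces the table to be a deterministic function of a short seed, so any queried cell can be regenerated on demand:

```python
import hashlib

def init_phase(seed: bytes, num_cells: int) -> list[bytes]:
    """Expand a short seed into the full table, deterministically.
    (Toy: here each cell is one cheap hash; the real construction
    makes each cell expensive to regenerate.)"""
    return [hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
            for i in range(num_cells)]

def simulate_cell(seed: bytes, i: int) -> bytes:
    """The simulation attack: a prover who kept only the seed can
    recompute any queried cell instead of storing the table."""
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()

table = init_phase(b"tiny seed", 1024)               # ~32 KiB of "storage"
assert simulate_cell(b"tiny seed", 17) == table[17]  # no storage was needed
```

In this toy, regenerating a cell costs a single hash, so the OR statement would be worthless; the construction below makes regeneration expensive, which is what gives the OR statement its teeth.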
Why? Because rational parties will store the data, and the polar bears will be happy. And it's still fine in terms of security even if the adversary is not rational, because the adversary's cost will be high no matter which strategy it chooses. So here is our new assumption: instead of an honest majority of disk space, or an honest majority of CPU, we assume the honest parties control a majority of the combined resources. By combined resource we mean that there's some trade-off factor between the cost of storage and the cost of CPU, and we talk about a majority in terms of cost.

Okay, so what do we actually achieve in this paper? First, we get a very simple construction of a PoST, a proof of space-time, that is secure in the random-oracle model and needs no other assumptions. The construction has an adjustable initialization difficulty: if the price of storage relative to the price of CPU changes, or if I want to increase the length of time parties are required to store the data, which increases their storage cost, I can turn a knob and raise the initialization cost so that storing remains the rational choice. Moreover, we can do this incrementally: if you initialized your data at a certain difficulty, and later we decide the price of storage has gone up, so storing is no longer rational and the initialization difficulty must increase, you only have to do the delta of work between the old difficulty and the new one; you don't need to redo everything (see the sketch below). We also have a nice market-based mechanism for determining what difficulty is actually needed to keep storage rational: essentially, we can detect when parties are using work instead of storage, and use that signal to raise the difficulty when needed. Finally, we actually implemented this; it's simple enough that it's used as part of the Spacemesh consensus protocol, a cryptocurrency based on proof of space-time.

For those of you who've been in this field for a while: there have already been several papers, and it can be a bit confusing, because some of them are called proofs of space while this is proof of space-time. So what's the difference? There are several, but I think these are the highlights. The proof-of-space constructions are fairly complicated: they rely on graph-pebbling arguments, and implementing them is also more involved, because you need to generate a graph with a particular topology. Our construction, as you'll see in a moment, is extremely simple, and the arguments are just basic information-theoretic compression arguments. We also have the adjustable initialization difficulty, which means, as I said, that if you want to increase the time between proofs, say to one week, or two weeks, or a month, you can just raise the initialization difficulty to keep storing rational. In the proof-of-space constructions, the initialization difficulty is tied to the size of the graph being generated, so to raise the difficulty you have to either increase the verification cost or increase the amount of data being stored.
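Here is a minimal sketch of one way the incremental difficulty could be realized. The talk doesn't spell out the mechanism, so this is an assumption on my part: treat the difficulty as the size of the nonce range scanned per table entry, keeping the best nonce found, so that raising the difficulty from T to T' costs only the T' - T extra hashes:

```python
import hashlib

def refine_entry(identity: bytes, index: int, start: int, end: int,
                 best: tuple[int, bytes] | None = None) -> tuple[int, bytes]:
    """Hypothetical incremental PoW: scan nonces in [start, end) and keep
    the nonce with the numerically smallest hash. The old best is passed
    in, so an upgrade only pays for the newly scanned range."""
    for nonce in range(start, end):
        h = hashlib.sha256(identity + index.to_bytes(8, "big")
                           + nonce.to_bytes(8, "big")).digest()
        if best is None or h < best[1]:
            best = (nonce, h)
    return best

identity = b"prover-id"
entry = refine_entry(identity, 0, 0, 1_000)             # initial difficulty T = 1000
entry = refine_entry(identity, 0, 1_000, 4_000, entry)  # raise to T' = 4000: delta work only
```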
But all is not rosy; it's not that we're strictly better. The real advantage of the proofs of space is that their prover runs in time polylogarithmic in the space, so their prover is much more efficient; in our case, the prover has to read the entire storage for every proof. So the results are actually incomparable, and you should think of them as targeting different parameter regimes. If you want a proof every ten seconds, you probably don't want a proof of space-time, at least not our construction. If you want a proof every month, then you probably do.

Okay, what about memory-hard functions? These are also highly related, but they're doing something different. A memory-hard function gives you a lower bound on the amount of space, or even space-time, used while computing the function, that is, on the complexity of the computation itself. A proof of space-time gives you a lower bound, with the trade-off I mentioned, on the amount of space-time used between the proof computations. So if you're using a memory-hard function, nothing prevents you from reusing the memory: you can't use it as a proof of space-time, because you could just use the same memory over and over again. On the other hand, if what you want is a memory-hard function, you cannot use a proof of space-time, because it gives no lower bound on the amount of computation; as you'll see, our construction actually requires very little memory to compute.

Okay, so we've got the "what" out of the way; now, how do we do it? The very high-level idea is that what we store is a table, and every entry in the table is a proof of work. This gives us very fine-grained control over the initialization cost, because with proofs of work we can tune exactly how much work each entry requires, which determines how much work the entire table requires. It's also very easy to verify that a table entry is a correct proof of work: we just use the proof-of-work verifier. So that's great.

What do we do in the execution phase? What we'd like to do is say: the verifier queries some random positions in the table, and if you didn't store the table, you won't be able to answer without doing work. But this doesn't quite work. Why not? Because the prover can store nothing at all, and once the verifier queries, just reconstruct the particular cells that were queried. The response to each query will always be correct, but the actual amount of work will be small: the prover only has to reconstruct a few cells and never has to store anything. So that is definitely not a good proof of space-time.

So what do we actually do to construct a PoST? The initialization really does work as described: we fill the table with proofs of work. The execution phase is just a little different. The verifier sends a random challenge, and the prover begins the execution phase by committing to the entire table. Then the verifier asks random queries, and for each query the prover shows both that this is the value at that position of the table it committed to, and that the value is a valid proof of work; both of these are easy to verify. A sketch of this flow follows below.
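Here is a minimal, runnable sketch of that flow under some assumptions of mine: SHA-256 stands in for the random oracle, the commitment is a Merkle root, the query indices are derived from the challenge and the root, and a real protocol would also return Merkle authentication paths for each opened cell. All parameter names and values are illustrative:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def solve_pow(identity: bytes, i: int, d: int) -> int:
    """Initialization work for one cell: find a nonce whose hash has
    d leading zero bits (expected 2**d hash calls)."""
    nonce = 0
    while int.from_bytes(H(identity, i.to_bytes(8, "big"),
                           nonce.to_bytes(8, "big")), "big") >> (256 - d):
        nonce += 1
    return nonce

def merkle_root(leaves: list[bytes]) -> bytes:
    """Commitment to the whole table; computing it forces a full read.
    (Assumes len(leaves) is a power of two; opening paths omitted.)"""
    while len(leaves) > 1:
        leaves = [H(leaves[j], leaves[j + 1]) for j in range(0, len(leaves), 2)]
    return leaves[0]

def execute(table: list[int], challenge: bytes, k: int):
    """Execution phase: commit first, then answer k queries derived from
    the challenge and the commitment, so the prover must fix the whole
    table before learning which cells are checked."""
    root = merkle_root([H(t.to_bytes(8, "big")) for t in table])
    queries = [int.from_bytes(H(challenge, root, j.to_bytes(4, "big")), "big")
               % len(table) for j in range(k)]
    return root, [(q, table[q]) for q in queries]

identity, d, n = b"prover-id", 8, 16
table = [solve_pow(identity, i, d) for i in range(n)]  # initialization phase
root, openings = execute(table, b"epoch-42-challenge", k=4)
```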
Okay, so why does this work? One thing to note is that the commitment here is in the execution phase: every time I run the execution phase, I make a fresh commitment. This is actually what takes a long time, since I have to read everything, and I have to read everything precisely because of this. In our construction, it doesn't help at all to commit in the initialization phase, because the attack I used previously still works: if I only committed during initialization, I could commit to the entire table, then forget everything and just reconstruct the cells I need.

So why does it work? The intuition is very simple: before responding, before committing really, the prover has to decide which cells of the table it is going to reconstruct, if it hasn't stored them already. It can decide to reconstruct some of them, but anything it doesn't reconstruct, it has committed to being bad. Now, when it gets a random query, if there is a significant fraction of bad cells, it will get caught with high probability. So basically it has to spend either storage, to store the cells, or CPU, to reconstruct them, and at the time of the commitment it has to have a mostly full table.

There are some subtleties. As I've described it, this doesn't quite work: I can't just use any proof of work. The reason is that a proof of work only guarantees that work was done; it could be that the proof of work lets the adversary do the work and then compress the results. Maybe after doing all the work to fill the table, I can compress the results into something much smaller than the table, and then I'm not using the storage that I should be. So what we need is some notion of an incompressible proof of work. This is a bit different from the standard notions of incompressibility, because we have a random oracle: I can't just argue that the output of the random oracle is random and therefore cannot be compressed, because during decompression I can always query the random oracle again. It needs to be incompressible even in the presence of the random oracle. Luckily, we show that the standard hash-based proof of work is incompressible, as long as what we store is just the nonce. We don't store the output of the random oracle: that output could be very large, and it is compressible, because it can be recovered by remembering only the nonce. If the nonce is what we store, then this is an incompressible proof of work, and the whole construction goes through. A sketch follows below.
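To make that concrete, here is a small sketch of the hash-based proof of work with nonce-only storage; again SHA-256 stands in for the random oracle, and the names and parameters are illustrative assumptions:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def verify_entry(identity: bytes, index: int, nonce: int, d: int) -> bool:
    """Verification is a single hash call: recompute the oracle output and
    check that its top d bits are zero. Per cell, the prover stores ONLY
    the nonce; storing the oracle output instead would be compressible,
    since the short nonce describes it and the oracle can always be
    re-queried to decompress."""
    out = int.from_bytes(H(identity, index.to_bytes(8, "big"),
                           nonce.to_bytes(8, "big")), "big")
    return out >> (256 - d) == 0

# Toy demo: solve one cell at difficulty d = 8 (about 2**8 hash calls).
nonce = next(x for x in range(1 << 20) if verify_entry(b"prover-id", 0, x, 8))
assert verify_entry(b"prover-id", 0, nonce, 8)
```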
Something I haven't shown you in this talk, but which is in the paper, is the market-based mechanism for detecting when users are using work instead of storage. There's also an additional subtlety concerning how much work is needed to fill a table of a given size: for some parameters we actually want to use a different proof of work, also extremely simple (just run the hash once and keep a few bits), and we show that it too is incompressible.

There are also some open questions, unsurprisingly. One is whether we can get the best of both worlds between proofs of space and proofs of space-time: something that has low prover complexity and also this nice incremental, adjustable difficulty. And we've shown that two proof-of-work constructions are incompressible; can other proof-of-work constructions also be shown incompressible and used in this framework? Thank you.

Many thanks for the nice talk. Are there any questions? I guess it was so simple that everyone is blown away, so let's thank the speaker again. The next talk is the invited talk, so please go over.