So I will talk about proofs of space, and this is something we use in the application of decentralized storage. The setting is that we have some storage providers in a decentralized network that want to onboard storage capacity to the network, and they are incentivized by some rewards. So they need to prove that they really provide this disk space, and nodes in the network should be able to verify that this storage capacity is maintained, secured, and indeed persistently provided to the network. This requires some checks and some proofs of storage. The tool we use here is vector commitments: as we've seen, we can commit to a data vector in a very concise manner, and then later open some positions of the data to make sure that someone is really storing that vector. So how exactly does it work for Filecoin? The proof of storage requires splitting the data into sectors of 32 gigabytes. Because a sector can be any kind of data, we encode it in an incompressible manner, commit to the encoded 32 gigabytes, and submit the commitment to the chain, so that later on we can check that the storage is still there. Later we will ask the storage provider to show us some positions in this sector that it still stores: it opens the commitment at those positions, and we check the proofs against the commitment we already have in the blocks. As I said, the first step in this process is to take the data and encode it, what we call sealing it into a replica, which is another version of the data that is incompressible. This makes sure the storage provider is indeed dedicating that amount of space on its disk to the network: the stored data should not be compressible.
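To make the commit-then-open idea concrete, here is a minimal sketch of a Merkle tree used as a vector commitment, the construction the talk says is used in practice. All names are illustrative, SHA-256 stands in for whatever hash the real system uses, and the vector length is assumed to be a power of two.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def commit(leaves):
    # Build a Merkle tree over the data vector; the root is the concise
    # commitment that gets submitted to the chain.
    level = [H(x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree

def open_position(tree, i):
    # An opening is the list of sibling hashes on the path from leaf i
    # up to the root (logarithmic in the vector length).
    path = []
    for level in tree[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, i, leaf, path):
    # The verifier recomputes the root from the claimed leaf and the path.
    acc = H(leaf)
    for sib in path:
        acc = H(acc + sib) if i % 2 == 0 else H(sib + acc)
        i //= 2
    return acc == root

sector = [bytes([j]) * 32 for j in range(8)]   # toy 8-block "sector"
root, tree = commit(sector)
proof = open_position(tree, 5)
assert verify(root, 5, sector[5], proof)
```

The provider keeps the whole tree so it can answer openings; the verifier only ever needs the 32-byte root.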
This encoding or sealing process is a very large computation that takes a lot of time, and we can estimate exactly how long it takes. We make sure it takes a long time, so it is not possible to do it in the short period when you are challenged to show that you have a replica. That is the assumption: replication, that is, encoding the data, is slow, so once you have the replica you had better store it rather than delete it and try to re-encode it from some compressible data. So what a storage provider will do is commit to the data and commit to the replica after computing it, and put the commitments into a block on the chain, submitted as "my commitment that I store some replica on my disk". Now, to make sure that this commitment to the replica really is a commitment to something incompressible, in the first phase we ask the storage provider to demonstrate that it indeed did the encoding into the replica. This proof of replication works by sending a challenge to the storage provider, which has to prove that it encoded the data: it opens some positions in the initial data and some positions in the replica, and then proves that the relation between the openings is valid with respect to the encoding process, which is public. So it indeed followed this encoding strategy to obtain the positions in the replica from the corresponding positions in the initial data. This encoding is a large computation, and proving a large computation requires some techniques to make it fast to verify. We use succinct non-interactive arguments of knowledge, known as SNARKs, to prove such a process and to make sure that the storage provider really had a valid replica that it committed to on the chain.
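The key property of sealing is that it is deliberately slow but invertible. Here is a toy sketch of that shape, a sequential chained encoding, where each block's key depends on the previous encoded block. This is not Filecoin's actual stacked-DRG sealing; the function names, the round count, and the hash-chain key derivation are all illustrative assumptions.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def derive_key(seed: bytes, rounds: int) -> bytes:
    # Deliberately sequential key derivation: each step depends on the
    # previous one, so it cannot be shortcut or parallelized away.
    k = seed
    for _ in range(rounds):
        k = H(k)
    return k

def seal(blocks, provider_id: bytes, rounds: int = 10_000):
    # Encode each 32-byte block with a key chained to the previous encoded
    # block, so recomputing the whole replica from scratch is slow.
    replica, prev = [], H(provider_id)
    for blk in blocks:
        key = derive_key(prev, rounds)
        replica.append(bytes(a ^ b for a, b in zip(blk, key)))
        prev = H(replica[-1])
    return replica

def unseal(replica, provider_id: bytes, rounds: int = 10_000):
    # Decoding re-derives the same keys from the encoded blocks, so the
    # original data is always recoverable from the replica alone.
    blocks, prev = [], H(provider_id)
    for enc in replica:
        key = derive_key(prev, rounds)
        blocks.append(bytes(a ^ b for a, b in zip(enc, key)))
        prev = H(enc)
    return blocks
```

Because the keys chain block to block, a cheater who deleted the replica must redo the full sequential work before it can answer any challenge, which is exactly the timing gap the protocol relies on.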
In this case, we trust that the committed replica and the committed data the user submitted to the chain are correct, because the SNARK verifies them against the encoding procedure. Then later on, now that we trust the initial commitment, we want to make sure that this is persistent storage: the storage provider should continue to store the data, should continue to allocate this disk capacity to the network. To make sure this is the case, we periodically query the storage provider for some positions to be opened from the replica. The storage provider only needs to store the replica and the commitment to the replica; it can even delete the original data, because it can be recovered from the replica. It is then challenged to open some random positions in the replica and demonstrate that those are valid with respect to the initial commitment to the replica, which is trusted because it was already proven by the proof of replication. And it submits these proofs to the chain. Because a provider that deleted the replica would have to do a long computation to recover it from some compressible data, that cheating strategy is not viable: the queries for opening positions in the replica must be answered fast. Now, in practice, in our systems we are using Merkle trees and not the newer vector commitments, so we are using something that is not that succinct and does not really have updates; we lack updatability and the other fancy features of newer vector commitments. We just take the data and hash it into a tree, and the commitment is the root of the tree. The proof of replication, showing the encoding, asks us to open some positions in the replica and in the data and show the relation between them, and the relation is shown by a SNARK.
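The periodic challenges must be unpredictable, so the positions to open are typically derived from public on-chain randomness. A minimal sketch of that derivation, with illustrative names and an assumed SHA-256-based expansion:

```python
import hashlib

def challenge_indices(randomness: bytes, epoch: int, num_leaves: int, k: int):
    # Derive k pseudorandom replica positions for this epoch from public
    # chain randomness. Everyone can recompute them, but the provider
    # cannot predict them before the randomness is published, so it must
    # keep the whole replica available to answer.
    indices = []
    for ctr in range(k):
        d = hashlib.sha256(
            randomness + epoch.to_bytes(8, "big") + ctr.to_bytes(4, "big")
        ).digest()
        indices.append(int.from_bytes(d[:8], "big") % num_leaves)
    return indices
```

The provider then opens exactly these leaves against the trusted replica commitment and submits the proofs to the chain.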
What is worse is that even aggregating openings for vector commitments that are Merkle trees really needs a SNARK in order to make the proof compact, so we SNARK-ify everything. This requires a lot of SNARKs in the proofs of replication, and it is also not optimal for the proof of spacetime. There we issue these challenges and have to open different positions in the replica each time, and opening many positions in a Merkle tree requires many proofs, each of logarithmic size. Since we want to avoid a very large proof, we put a SNARK on top: we feed the many Merkle openings into the SNARK and submit that SNARK proof to the chain. A future direction will be to overcome this limitation of Merkle trees: they are not very SNARK-friendly, not compatible with the SNARK's algebraic structure, because of the hashing. We would like to find another way to commit to the initial vector and to the replica vector that allows proving knowledge of a sub-vector more efficiently than just putting many Merkle openings into a SNARK. We also need something better for the replication proof, which is an overhead today, because we need a SNARK to prove that some openings of a vector commitment satisfy some property, this encoding property. So a vector commitment more compatible with this would also help improve our protocol. Some of the open problems were also mentioned in the previous talk, and they are of independent interest, but they are also very useful for this use case: functional vector commitments with more expressive functions, so we can open positions and show that the positions satisfy some function, for the proof of replication; of course transparent setup; post-quantum security, which is always good to prevent future quantum attacks; and aggregation for subvector commitments as well.
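To see why batched ("subvector") openings matter, here is a small sketch that counts how many sibling hashes a batched Merkle opening must ship, deduplicating tree nodes shared between the individual paths. It is an accounting sketch only, with illustrative names, not part of the real protocol.

```python
def batch_opening_size(num_leaves: int, positions) -> int:
    # Count sibling hashes needed to open a set of leaves together.
    # A sibling need not be sent if the verifier can compute it, i.e.
    # if it lies on (or above) another opened leaf's path.
    frontier = set(positions)
    needed, width = 0, num_leaves
    while width > 1:
        nxt = set()
        for i in frontier:
            if i ^ 1 not in frontier:
                needed += 1  # sibling must be included in the proof
            nxt.add(i // 2)
        frontier = nxt
        width //= 2
    return needed
```

For 16 adjacent leaves out of 2^10, a naive proof ships 16 × 10 = 160 hashes, while the batched opening needs only 6; even so, each hash still has to be re-verified inside a SNARK circuit, which is the Merkle-unfriendliness the talk points at.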
We also don't necessarily need to show the openings to the verifier: an argument of knowledge of some opening suffices for this application. Then there are structure-preserving vector commitments, if we want to replace the Merkle tree with something that uses vector commitments, and trade-offs between storage and computation for the prover, as already presented in the previous talk. So those are the open questions, and now I'm taking your questions. I don't see any, so maybe we should move forward; we can always have a Q&A discussion in between talks.