Hello! I am Dimitris Kolonelos, from the IMDEA Software Institute. This talk is about incrementally aggregatable vector commitments and their applications to verifiable decentralized storage. It is joint work with Matteo Campanelli from Aarhus, Dario Fiore, who is my advisor, and Nicola Greco and Luca Nizzardo from Protocol Labs. Let us start by seeing what vector commitments are. This is a primitive, introduced by Catalano and Fiore in 2013, that lets you commit to a vector of values so that you can later open any position, meaning that it allows you to output the value at some position, together with an opening proof that testifies to the validity of this value. Now, a bit more formally, it consists of four algorithms: a setup algorithm that creates the common reference string; a commit algorithm that, of course, creates the commitment to the vector; an open algorithm that takes the whole vector and creates an opening proof for a specific position; and finally a verify algorithm that takes a value for some position, together with its corresponding opening proof, and outputs 1 only if it is the value at this position of the committed vector. For security we require position binding, which says that no one should be able to create valid opening proofs for two different values at the same position, meaning that you can uniquely open any position of the vector. And finally, the interesting property of this primitive is succinctness, which states that the commitments and the opening proofs should have size at most polylogarithmic in the size of the vector. Then, in 2019, two works, by Lai and Malavolta and by Boneh, Bünz and Fisch, introduced a new property of vector commitments called sub-vector openings.
Sub-vector opening is the ability to produce a single opening proof for many positions of the vector at once. Crucially, this single proof must be succinct, meaning that it should have the same size as it would have for a single position, which is what makes this property of sub-vector openings interesting. Before our work, the only known vector commitment that supports sub-vector openings with constant-size parameters was the one by Boneh et al., which works over groups of unknown order. I would also like to mention some works concurrent with ours: one by Tomescu et al. and another by Gorbunov et al., both from this year. They both support sub-vector openings, but they both have linear-size parameters, which in some settings is a drawback. Finally, I want to mention one more work that appeared in 2020; it also supports sub-vector openings and has constant-size parameters. Now, moving on to our contributions. First, we formalize a notion called incremental aggregation, motivated by two applications that we are going to discuss: one has to do with efficiently opening a vector commitment via preprocessing, and the other has to do with decentralized storage.
For the latter, we formalize a primitive called verifiable decentralized storage. Then, on the construction side, we give two incrementally aggregatable vector commitments, plus two verifiable decentralized storage schemes based on these vector commitments. So, we said we introduce incremental aggregation; let us see what this property is. First of all, aggregating proofs simply means taking two proofs for different positions and merging them into one. The difference between this and sub-vector openings is that now you should be able to do it without knowing the whole vector, meaning that you should only know the proofs and the opening values of the positions that you are aggregating. And of course, the resulting proofs should have the same size as the initial ones; they should be succinct. Now, the incremental aggregation property says that you have unbounded aggregation and unbounded disaggregation. Unbounded aggregation means that you can aggregate opening proofs arbitrarily, an unbounded number of times, while unbounded disaggregation is the inverse operation, stating that you can take an aggregated proof and disaggregate it to any arbitrary subset of its positions. A toy example that illustrates the usefulness of this property is a network that cooperates in order to output some positions of the vector. The first node has one position, and it outputs an opening proof for it. The second node has another opening value, which it adds to the set of opening values, but it also outputs a single aggregated proof for these two values, meaning that the second node does not have to send two opening proofs, because it is able to aggregate them. And this goes on: the third node also outputs a third value, again together with one aggregated opening proof.
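For RSA-accumulator-style membership witnesses (which will appear later in this talk), merging two witnesses without knowing the whole accumulated set is classically done with Shamir's trick. A sketch with deliberately tiny, insecure parameters (a real instantiation needs a large modulus of unknown order):

```python
# Aggregating two RSA-accumulator membership witnesses via Shamir's trick.
# Toy parameters only: a real scheme needs a large hidden-order modulus.
def egcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def pow_signed(base, e, n):
    # Modular exponentiation allowing negative exponents (Python >= 3.8).
    return pow(base, e, n) if e >= 0 else pow(pow(base, -1, n), -e, n)

def shamir_aggregate(w1, p1, w2, p2, n):
    # Given w1 = g^(P/p1) and w2 = g^(P/p2) mod n, with p1, p2 coprime,
    # output g^(P/(p1*p2)): a single witness covering *both* elements,
    # computed from the two witnesses alone.
    g, a, b = egcd(p1, p2)
    assert g == 1
    return (pow_signed(w1, b, n) * pow_signed(w2, a, n)) % n

# Demo: accumulator A = g^(p1*p2*p3) over a toy modulus N.
N, g = 3233, 2                       # 3233 = 61 * 53 -- demo only, insecure
p1, p2, p3 = 3, 5, 7
A  = pow(g, p1 * p2 * p3, N)
w1 = pow(g, p2 * p3, N)              # membership witness for p1
w2 = pow(g, p1 * p3, N)              # membership witness for p2
w12 = shamir_aggregate(w1, p1, w2, p2, N)
assert pow(w12, p1 * p2, N) == A     # one witness now certifies both elements
```

The aggregation uses only the two witnesses and the two elements, exactly the "no whole vector needed" requirement above.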
The example shows that, since you are able to aggregate opening proofs, the communication overhead inside this network is optimal, meaning that each node only ever has to send one extra opening proof. So, let us see our first application, which, as we said, has to do with efficient opening with preprocessing. The problem with opening a vector commitment is that it inherently takes at least linear time: to compute an opening proof, you have to perform a number of operations linear in the size of the vector. To overcome this, we propose a method with preprocessing. It is based on an offline phase, in which you precompute aggregated proofs that cover the whole vector. Then, in the online phase, you receive a query, you disaggregate your precomputed aggregated proofs to get a distinct proof for each position of the query, and finally you aggregate these back together in order to answer the query. Since, as we said, disaggregation and aggregation do not touch the whole vector, this is much more efficient than computing the answer to the query, the opening proof, from scratch. We said that in the offline phase you have to store aggregated proofs. This means that you have a storage overhead, which is always the cost of a preprocessing method. The aggregated proofs cover chunks of the vector of some size: if each chunk has size b, then you have a storage overhead of n/b proofs. This also adds an extra overhead, proportional to b, to the online time, because in the online phase you have to disaggregate these proofs. This is what we call the b trade-off: a trade-off between storage overhead and online time, and b is the parameter that characterizes it. The caveat in the previous method appears when, for example, b equals 1: in the offline phase you then need to compute all the proofs, one proof for each position of the vector.
Naively, that would take quadratic time, because you have to compute n proofs, each of which, as we said, takes at least linear time. To speed up this computation, our idea was a divide-and-conquer algorithm. First, compute an aggregated proof for all the positions of the vector, which, believe me, is inherently very efficient to do; it takes constant time. Then disaggregate it into a proof for the left half and a proof for the right half of the vector. You continue like that, in a divide-and-conquer fashion, and at the end you get n log n time instead of quadratic time. The interesting part is that this method is generic: we have not assumed any particular construction. The only thing we have assumed is that the vector commitment is incrementally aggregatable, in order to have this disaggregation. Okay, so now let us move to our constructions. We have two, which both work over groups of unknown order and both have constant-size parameters. One builds on the BBF vector commitment, while the second builds on the LM19 vector commitment. So, now let us see how our first construction works. As a warm-up, let us first see how the BBF vector commitment works. We have a vector of bits, meaning that each position of the vector has value either zero or one. The first thing we need to do is to put each position that has value one into a set, which we informally call the set of ones. Then we use an object called an RSA accumulator to create a small binding digest of this set. So, RSA accumulators can be used to create commitments to a set.
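Stepping back to the preprocessing application for a moment, the divide-and-conquer precomputation can be sketched generically against any incrementally aggregatable commitment. Here `disaggregate` is a toy stand-in (a proof is modeled as nothing but the set of positions it covers), whose only purpose is to show the recursion shape and count the calls; the actual per-call cost depends on the concrete scheme:

```python
# Divide-and-conquer precomputation of all n opening proofs, assuming an
# incrementally aggregatable vector commitment. The scheme is abstracted
# behind `disaggregate`; a toy proof representation shows the recursion.
calls = {"disaggregate": 0}

def disaggregate(proof, subset):
    # Toy stand-in for the scheme's disaggregation algorithm: restrict a
    # proof to a subset of the positions it covers.
    calls["disaggregate"] += 1
    assert subset <= proof
    return frozenset(subset)

def precompute_all(proof_all, positions):
    # proof_all covers `positions`; return one proof per single position.
    if len(positions) == 1:
        return {next(iter(positions)): proof_all}
    ordered = sorted(positions)
    mid = len(ordered) // 2
    left, right = set(ordered[:mid]), set(ordered[mid:])
    out = {}
    out.update(precompute_all(disaggregate(proof_all, left), left))
    out.update(precompute_all(disaggregate(proof_all, right), right))
    return out

n = 8
everything = frozenset(range(n))   # one aggregated proof for the whole vector
table = precompute_all(everything, set(range(n)))
assert len(table) == n
# ~2n disaggregations, over log n levels of halving inputs: the n log n bound.
assert calls["disaggregate"] == 2 * (n - 1)
```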
At the same time, RSA accumulators give you the ability to create membership witnesses for elements in the set, or non-membership witnesses for elements outside the set. So the vector commitment goes like this: the commitment is the output of the RSA accumulator, and for an opening we provide either a membership witness if the element is one, meaning that its position is in the set of ones, or a non-membership witness if the element is zero. Now, the problem with this BBF vector commitment is that the non-membership witnesses, specifically, are not incrementally aggregatable; the membership witnesses are, but the non-membership witnesses are not. To achieve the incremental aggregation property, we need to get rid of these non-membership witnesses. But without non-membership witnesses, we cannot provide opening proofs for the positions that have value zero. For that, we need to construct another set, the set of zeros, and use a second accumulator for it. Now we have two accumulated sets, and we provide either a membership witness with respect to the set of ones if the element is one, or a membership witness with respect to the set of zeros if the element is zero. But this still does not satisfy position binding, because an attacker could, for example, put an element in both sets. We need to prevent this. To do so, we must show that the union of the two accumulated sets, the set of ones and the set of zeros, is exactly the set of all the positions, meaning that the two sets form a partition of the set of all positions. For this we construct a succinct argument of knowledge, which we call a proof of union, that proves exactly this statement. And this concludes our construction: the commitment consists of the accumulated value for the zeros, the accumulated value for the ones, and the succinct proof of union.
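The two-accumulator idea can be sketched as follows. This is a simplified illustration, not the full scheme: the modulus is tiny and insecure, position binding is therefore not actually enforced, and the succinct proof of union is omitted entirely:

```python
# Sketch of the two-accumulator construction: positions with bit 1 go into
# one RSA accumulator, positions with bit 0 into another, so every opening
# is a membership witness. Toy modulus, demo only; the proof of union that
# the two sets partition all positions is omitted.
from math import prod

N, g = 3233, 2                            # 3233 = 61 * 53 -- insecure demo modulus
PRIMES = [3, 5, 7, 11, 13, 17, 19, 23]    # prime label for each position

def commit(bits):
    ones  = [PRIMES[i] for i, b in enumerate(bits) if b == 1]
    zeros = [PRIMES[i] for i, b in enumerate(bits) if b == 0]
    acc1 = pow(g, prod(ones, start=1), N)
    acc0 = pow(g, prod(zeros, start=1), N)
    return acc0, acc1                      # (+ proof of union, omitted here)

def open_pos(bits, i):
    # Membership witness: accumulate every prime in the same set except ours.
    same = [PRIMES[j] for j, b in enumerate(bits) if b == bits[i] and j != i]
    return pow(g, prod(same, start=1), N)

def verify(com, i, bit, witness):
    acc0, acc1 = com
    target = acc1 if bit == 1 else acc0
    return pow(witness, PRIMES[i], N) == target

bits = [1, 0, 1, 1, 0, 0, 1, 0]
com = commit(bits)
assert verify(com, 2, 1, open_pos(bits, 2))   # a one-position
assert verify(com, 4, 0, open_pos(bits, 4))   # a zero-position
```

Since every opening is now a membership witness, aggregation of openings reduces to aggregation of membership witnesses.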
And finally, since the openings are only membership witnesses, as we described, we get the incremental aggregation property. Okay, now for our second incrementally aggregatable vector commitment, we take the LM19 sub-vector commitment, which unfortunately, as we said, has linear-size parameters. So we have to apply some tricks to make the parameters constant; at the same time, we provide methods for unbounded aggregation and disaggregation. And that concludes our constructions. This is a comparison table that summarizes the asymptotics of our schemes; we compare against vector commitments with constant-size parameters. Some observations on this table: as we said, we support incremental aggregation, while BBF supports only one-hop aggregation, meaning that you cannot aggregate already-aggregated proofs, and Merkle trees do not support aggregation at all. This is what allows us to have the precomputation trade-off that we described, which is not possible with BBF. Also, our second vector commitment is the most efficient of these three vector commitments with sub-vector openings: for example, the commitment is one group element, while the opening proof is only two group elements. Okay, so now we move to our second application, which has to do with decentralized storage services. In plain words, the decentralized storage problem is that you have a file, which you model as a vector, and you want to decentralize its storage. This means that you want a network of nodes to store the file.
The main applications of this are decentralized storage networks, as for example the IPFS project or Filecoin, and stateless account-based cryptocurrencies, where the stored file is specifically the key-value map of the accounts, and the nodes storing the file are specifically the owners of the accounts. In decentralized storage you have two types of nodes: storage nodes, which participate in the storage network, and client nodes, which only wish to learn some sub-file, some part of the file. The interaction is that a client sends a query to the network; the network should cooperate somehow in order to retrieve the opening values at these specific positions; the network sends these values to the client; and then the client wants to be sure that these values are correct with respect to the file that the network is storing. Another aspect of the problem is that we should consider updates, otherwise the static case does not seem very realistic. We consider three types of updates: modification, which simply modifies some position; deletion, from the end of the vector; and addition of new values at the end of the vector. Now, for these storage networks there is a general rule which says that every node should pay according to how much it has chosen to store. This is a very important property for a decentralized storage network since, for example, the file that the whole network is storing can be huge, while we do not want to exclude any storage node, even one that does not have big computational power or memory. So, for example, we can have a network with amazing storage capabilities, storing, say, one exabyte,
while a storage node only wishes to participate with her laptop, and can only afford to allocate 100 gigabytes for the network. We do not want to exclude such storage nodes, and we cannot possibly assume that they can do operations linear in the size of the whole file, or store something linear in the size of the file; this is impossible. To solve this problem, we introduce a primitive called verifiable decentralized storage. This is a cryptographic primitive dealing with the cryptographic aspects of the problem. The first thing is to add a commitment to the file, a commitment to the whole file that the network is storing; this commitment should be stored by every node of the network, even by client nodes. Storage nodes are then responsible for creating opening proofs together with the answers to a query, so that the client can take these opening proofs, together with the opening values that answer the query, and verify that these values are indeed valid with respect to the commitment. Now, the properties that we described for the decentralized storage problem translate into properties of vector commitments. We said that nobody should store anything linear in the size of the file; this means that we need constant-size parameters, as we cannot afford linear-size parameters. Then, nobody should do computations linear in the size of the file, meaning that we need efficient proof updates and digest updates; we cannot afford to compute either proofs or digests from scratch, which takes time linear in the size of the file. And finally, since the cryptographic overhead, as we described, should be minimal, we need incrementally aggregatable proofs.
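As one concrete instance of what an efficient digest and proof update can look like for accumulator-based commitments: appending an element to an RSA-accumulated set updates both the digest and every existing membership witness with a single exponentiation, with no pass over the whole file. A sketch, again with toy insecure parameters for illustration only:

```python
# Appending to an RSA-accumulated set: both the digest and an existing
# membership witness are updated with one exponentiation each -- no
# linear-time recomputation. Toy modulus, demo only.
N, g = 3233, 2                    # 3233 = 61 * 53 -- insecure demo modulus
p_old, p_new = 5, 7               # prime labels: an existing and a new element

acc = pow(g, p_old, N)            # digest of the set {p_old}
w_old = g                         # witness for p_old: g^(empty product) = g

# Append p_new:
acc_new = pow(acc, p_new, N)        # digest update: one exponentiation
w_old_new = pow(w_old, p_new, N)    # witness update for p_old: one exponentiation
w_new = pow(g, p_old, N)            # fresh witness for the new element

assert pow(w_old_new, p_old, N) == acc_new   # old element still verifies
assert pow(w_new, p_new, N) == acc_new       # new element verifies
```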
Now, if we take a look at the literature, any vector commitment we could use from it lacks some of these necessary properties, while our constructions, the vector commitments that we described in the previous slides, can satisfy all of them. So what we did to instantiate verifiable decentralized storage is to take our vector commitments and make the digests and proofs updatable, that is, we give methods to update digests and proofs efficiently. But this comes at the cost of slightly lowering the security: we now have weak position binding, meaning that position binding is guaranteed only for honestly generated digests; we cannot let an adversary compute the digest itself. So, concretely, our VDS constructions are nothing more and nothing less than our vector commitments with updatable digests and proofs. That said, there is a general statement that we can make: any vector commitment with these five properties, namely constant-size parameters, incremental aggregation, updatable digests, updatable proofs, and weak position binding, yields a VDS scheme. And that was the final statement. There are more things that can be found in the paper, and these are shown in this slide. So, thank you for your attention.