Hello, everyone. My name is Srini, and today I'm going to present joint work with Shashank Agrawal titled KVaC: Key-Value Commitments for Blockchains and Beyond. So let's get right into it.

First, some motivation. The motivation for our work, quite simply put, is the stateless validation of blockchains. As a quick recap, the key players in a blockchain are the users, who generate transactions; the miners, who generate blocks that contain these transactions; and the validators, who verify that these blocks were indeed produced correctly. More precisely, the job of a validator is to verify that the state of the system remains consistent over time. The state of the system for a blockchain like Bitcoin is the entire set of UTXOs, and for a blockchain like Ethereum it is the entire account ledger. This means that in order to verify the state of the system, the validator has to store the entire state of the system. And now you can imagine where I'm going with this: the state of the system has been growing rapidly over time, running into millions of UTXOs or Ethereum accounts. This disincentivizes people from validating the blockchain; the role of a validator becomes too hard, too demanding, and this is not something we would like. We'd like as many people as possible to verify the blockchain, so that we don't have to take other people's word for the correctness of the blockchain state.

The solution is to compress the state of the blockchain in some verifiable fashion, using primitives such as accumulators, vector commitments, or key-value commitments. The users in this context have to work a little harder to help the validators: they have to produce proofs that what they're claiming about the state of the blockchain, with respect to themselves, is indeed correct. Furthermore, they have to keep their proofs up to date, because the proofs will change as the state of the blockchain changes. So this is a small price to pay: the users work a little harder, but as long as these operations are efficient, more and more people can be incentivized to validate the state of the blockchain.

Accumulators work really well for the UTXO model. An accumulator gives a compressed representation of the entire UTXO set. Users can provide proofs of membership of their UTXOs in order to spend them, and the validators can verify these proofs and conclude that, yes, the UTXO does belong to this user and it is valid to spend it. For the account-based model, though, a primitive that's better suited is the key-value commitment. The reason is that the account ledger is itself a key-value map, and so a compressed representation of it is a key-value commitment. Users can then provide proofs of the values associated with their keys in order to update them; think of the balances associated with accounts and people wanting to make payments as a simple illustrative example. The validators can verify these proofs and hence conclude that the blockchain is being altered correctly.
Key-value commitments, formally defined, have a key generation algorithm that outputs public parameters and the initial commitment to the empty key-value map; an insert operation, where one can insert a key-value pair and obtain a proof, as well as update information that is sent to the rest of the players in the network so that they can keep their proofs up to date; and an update operation, where you can change the value corresponding to a key by some additive term delta. We're thinking of additive updates, where the value v corresponding to a key k can be changed to v + delta using this operation. The reason for looking at additive updates is, again, motivated by the example of a balance associated with an account: I want to either add some money to my account or spend some money from my account, which corresponds to an additive update where delta can be either positive or negative. This operation also releases update information that helps people keep their proofs up to date. The proof update algorithm helps you update your proofs, and finally, verification tells you whether a proof corresponds to a particular key-value pair within the key-value commitment.

What do we require of a key-value commitment? Well, correctness. I'm not going to define this formally here, but it is the most obvious thing you could imagine: if I insert things, update a bunch of things, and finally produce proofs that were all kept up to date, verification goes through. On the flip side, for security, we require key binding, which states that it should be hard to produce a proof of any value for a non-existent key, or a proof of an incorrect value for an existing key.
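Purely as an illustration of the interface just described, here is a minimal Python sketch; the class name, method names, and types are my own, not notation from the paper.

```python
# Illustrative interface only; names and types are my own, not the paper's.
from typing import Any, Tuple

class KeyValueCommitment:
    def key_gen(self) -> Tuple[Any, Any]:
        """Output public parameters pp and a commitment C to the empty map."""
        raise NotImplementedError

    def insert(self, C: Any, key: bytes, value: int) -> Tuple[Any, Any, Any]:
        """Insert (key, value); return the new commitment C', a proof pi for
        (key, value), and update information upd broadcast to other parties."""
        raise NotImplementedError

    def update(self, C: Any, key: bytes, delta: int) -> Tuple[Any, Any]:
        """Change key's value from v to v + delta (delta may be negative);
        crucially, the caller does not need to know v. Returns C' and upd."""
        raise NotImplementedError

    def proof_update(self, pi: Any, upd: Any) -> Any:
        """Refresh a held proof pi using broadcast update information upd."""
        raise NotImplementedError

    def verify(self, pp: Any, C: Any, key: bytes, value: int, pi: Any) -> bool:
        """Accept iff pi proves that key has value exactly `value` in C."""
        raise NotImplementedError
```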
So do key-value commitments directly give you everything you need for stateless validation of account-based blockchains? Well, not really; there are some issues, and we'll discuss them right now. The first one is concrete efficiency: we would like the efficiency of the system to not depend on the bit length of the values being committed to. The next one is a bound on the size of the map: many prior constructions require an a priori bound on the size of the map, and not only that, in some instances the efficiency of the construction and/or the size of the public parameters grow with the size of the map. Finally, updates require knowledge of the prior value. In our context, this would mean that if X wanted to pay Y $10, X would need to know Y's current balance, which is not something that is desirable.

Take all of these issues away, and that's what our work provides: a key-value commitment with efficient insert, update, proof-update, and verification operations, independent of the size of the map being committed to (in other words, you also don't need an a priori bound), and with the property that additive updates are oblivious to the prior value. This means that if X wanted to pay Y $10, X can do so without knowing Y's current balance. Our roadmap for today is to first discuss two building blocks, the insert-only and increment-only key-value commitments, that we put together to build the final KVaC key-value commitment; I'll finish with aggregation and some concluding remarks.

So let's begin with the insert-only key-value commitment. We're going to be working in groups of unknown order; g is a generator of a group of unknown order. The key-value commitment takes the form presented at the top of the slide: (g^{Σ_i v_i · Π_{j≠i} z_j}, g^{Π_j z_j}). The key-value map being committed to consists of pairs (k_i, v_i), and the values z_i are just distinct primes corresponding to the keys k_i. Presented this way, the key-value commitment simply contains a group description, a hash description, and just two group elements.

So how does this key-value commitment evolve with time? It starts off as the pair (1, g). When you insert (k1, v1), you get the new pair (g^{v1}, g^{z1}). After inserting a second key-value pair (k2, v2), you have (g^{v1·z2 + v2·z1}, g^{z1·z2}). If you insert a third one, the pattern continues, and so on. It's not hard to see that using the prior state of the commitment and the incoming key-value pair (k, v), or rather (z, v) after hashing k to get z, we can very easily update the state of the commitment: this corresponds to just raising the previous commitment values to either z or v and multiplying them appropriately, that is, (C1, C2) becomes (C1^z · C2^v, C2^z), and the structure of the commitment makes this really easy to do. So this key-value commitment supports very efficient insertion: with just three exponentiations, one multiplication, and one hash computation, we can update the state of the commitment on an insert.

The proof corresponding to a key-value pair, akin to many prior constructions, is going to be the key-value commitment of every other element in the map. This means that the proof obtained on an insert is simply the old commitment value. A proof update is just a further insertion, and proof verification is also an insertion: you take the proof, which is a commitment to everything else in the map, insert your proposed key-value pair, and check that you get the commitment back. This means that not only are these operations possible, they're as efficient as an insert, which we know is very efficient. Correctness follows by inspection.

I'd like to talk briefly about key binding. Say we have two proofs, pi and pi', corresponding to some key k_i, for two different values in the key-value map. Since proof verification is essentially an insertion, if both of these proofs are valid and verify, one can show, depending on how we choose the z_i (if we choose them to be somewhat large primes), that the second components of the two proofs must actually be the same; not just computationally hard to make different, they must be equal outright. Using that, if we argue about what happens with the first components, we can show that key binding comes down to the hardness of computing a particular value. Let's zoom into this: key binding comes down to the hardness of computing the z_i-th root of a quantity that involves all the other z_j, as well as the difference v_i - v_i' between the actual value and the fake purported value corresponding to the key k_i. Again, depending on how these z_i are chosen, if z_i is a large prime, it is going to be coprime to the difference v_i - v_i' as well as to every other prime there, because all the primes are supposed to be distinct.
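To make the insert rule (C1, C2) → (C1^z · C2^v, C2^z) concrete, here is a toy Python sketch. The tiny modulus, the generator, and the hash-to-prime routine are insecure stand-ins of my own; a real instantiation needs a group of genuinely unknown order and a proper hash-to-prime.

```python
# Toy insert-only key-value commitment; INSECURE parameters, for illustration.
import hashlib

N = 7 * 11 * 13 * 10007 * 10009   # stand-in for an RSA modulus of unknown order
g = 2                              # stand-in generator

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def hash_to_prime(key: str) -> int:
    """Map a key to a distinct odd prime (toy: hash to 32 bits, scan upward)."""
    z = int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big") | 1
    while not is_prime(z):
        z += 2
    return z

def insert(C, key, value):
    """(C1, C2) -> (C1^z * C2^value mod N, C2^z mod N), with z = H(key)."""
    C1, C2 = C
    z = hash_to_prime(key)
    return (pow(C1, z, N) * pow(C2, value, N) % N, pow(C2, z, N))

def verify(C, key, value, pi):
    """pi is the commitment to every *other* pair: inserting (key, value)
    into pi must reproduce C."""
    return insert(pi, key, value) == C

C0 = (1, g)                        # commitment to the empty map
C1 = insert(C0, "alice", 5)        # alice's proof is simply the old commitment C0
C2 = insert(C1, "bob", 7)
pi_alice = insert(C0, "bob", 7)    # proof update: replay bob's insert on the proof
assert verify(C1, "alice", 5, C0)
assert verify(C2, "alice", 5, pi_alice)
```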
This means that if one can compute the z_i-th root of this quantity, one can also compute the z_i-th root of g, which breaks the RSA assumption in this group of unknown order. So this is how we show that key binding holds based on the RSA assumption.

Okay, so that was the insert-only key-value commitment. Let's move on to the increment-only key-value commitment. Here, as the name suggests, the goal is to design a key-value commitment where we can only increment, and specifically increment by one, the value corresponding to any key. The commitment takes the form presented here: g^{Π_i z_i^{u_i}}, where u_i is the value associated with key k_i. This is very similar to the RSA accumulator, except that we can insert a particular element more than once, which corresponds to incrementing the value associated with that key. As before, we're working in a group of unknown order, and the z_i are distinct primes corresponding to the keys k_i. The commitment is again succinct: it just consists of the group description, the hash description, and a single group element.

How this commitment evolves with time is easy to see. It starts with g. If we increment the value corresponding to the key k1, we raise to the power z1, getting g^{z1}. If we then increment the value corresponding to key k2, we raise the commitment to the power z2, and now we can yet again increment the value corresponding to k1 by putting in another z1. And now g^{z1²·z2} reads as: key k1 has value 2 and key k2 has value 1, and so on. So an increment, as we observe, is just an exponentiation by the appropriate z; a single exponentiation, a single hash, very efficient.

What is the proof? Well, as before, we're going to try saying that it's an increment-only key-value commitment to every other key-value pair inside the map. The proof update would just be an appropriate increment, and verification would again go by incrementing: if you put in the right number of z's, you get back the commitment. Unfortunately, you can produce fake proofs in this context, and the reason is simple. If you look at the commitment state g^{z1³·z2}, where key k1 has value 3 and key k2 has value 1, I can produce proofs for the value corresponding to k1 being 1, 2, or 3: as long as there are three z1's in the exponent, I can take out as many as I want. I can store the intermediate commitment states generated along the way and use them to "prove" that the value is smaller than it actually is.

So this doesn't quite work, but we can fix it. Here's attempt 2. The proof now has two additional components, pi2 and pi3. What is their job? They show that the z in question is coprime to the exponent of g in the first component of the proof, pi1. So remember the fake proofs from the previous context? Two of them don't work anymore, because z1 is not coprime to z1²·z2, nor is it coprime to z1·z2. It is only coprime to z2, and that is indeed the right proof, because this forces the prover to put all the z's back in: the prover has to use the true value 3 and apply z1 three times to g^{z2} in order to reach g^{z1³·z2}, precisely because z1 is coprime to nothing else but z2.
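Here is a toy sketch of how such a coprimality certificate can work, in the style of Boneh et al.'s non-membership proofs for RSA accumulators; the exact shape of pi2 and pi3 in the paper may differ, so treat the variable names and packaging here as my own illustration.

```python
# Toy Bezout coprimality certificate; INSECURE parameters, for illustration.
N = 7 * 11 * 13 * 10007 * 10009   # stand-in modulus, as in the earlier sketch
g = 2

def ext_gcd(a: int, b: int):
    """Extended Euclid: return (d, x, y) with a*x + b*y = d = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    d, x, y = ext_gcd(b, a % b)
    return (d, y, x - (a // b) * y)

def prove_coprime(e: int, z: int):
    """The prover knows e, the exponent of pi1 = g^e, and certifies
    gcd(e, z) = 1 by finding a, b with a*e + b*z = 1; the certificate
    is (a, B = g^b)."""
    d, a, b = ext_gcd(e, z)
    assert d == 1, "z divides the exponent, so no certificate exists"
    return (a, pow(g, b, N))       # Python's pow handles negative b (3.8+)

def check_coprime(pi1: int, z: int, cert) -> bool:
    """Check pi1^a * B^z == g, i.e. g^(a*e + b*z) == g^1."""
    a, B = cert
    return pow(pi1, a, N) * pow(B, z, N) % N == g % N

# Example: pi1 = g^{z2}; certify that z1 is coprime to its exponent.
z1, z2 = 101, 103
pi1 = pow(g, z2, N)
assert check_coprime(pi1, z1, prove_coprime(z2, z1))
# For pi1 = g^{z1^2 * z2}, prove_coprime(z1 * z1 * z2, z1) fails its
# assertion: z1 is not coprime to z1^2 * z2, so understating the value
# associated with k1 is no longer possible.
```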
The proof update is now going to be a little more complicated, because we have to update these Bézout coefficients pi2 and pi3, but fortunately for us, Euclid has told us how to do this. I'm not going to go through it, but I'll leave it up here to hopefully convince you that pi2 and pi3 can be updated correctly as we keep incrementing values corresponding to other keys. Correctness of the construction again follows by inspection.

The next step is to formally prove key binding, and this turns out to be a more involved exercise. Using techniques from before, we can show that the first components of two verifying proofs have to be the same, but completing the entire proof that this construction is key binding, ultimately based on the RSA assumption, involves a few more steps. In fact, it has a cute step in the middle that involves showing that the Bézout coefficients of coprime numbers are themselves coprime. It's a very nice proof, a little more involved, and I'm going to leave it out here, but you can take it from me that this construction is key binding under the RSA assumption.

Now we're going to talk about how we put these two ideas together to build our final KVaC key-value commitment, which supports insertions as well as updates by arbitrary values. For that, let's first recall the insert-only key-value commitment. One really nice property of that construction is that the exponent of the first component is linear in the v_i's, and this is exactly what you would look for if you were trying to perform additive updates. In fact, we can already perform additive updates that are oblivious to the prior value: all we need is g raised to the product of all the z_j with j ≠ i, and then we can update v_i. Unfortunately, computing that quantity depends on m, the number of key-value pairs already inside the committed map. We would like to do away with this, and here's what we're going to do.

This brings us to the KVaC construction, which takes a somewhat complicated form. Simply put, what's happening is that we equate an update with an insert: we perform updates the exact same way we perform insertions. This means we end up inserting z multiple times for the same key as we keep updating the value corresponding to that key, which is why you see the term z^u, where u is the number of updates recorded in the commitment. Every time you update, you're actually doing another insert, but this works out fine, as we will show. An interesting consequence is that the second component now turns into an increment-only key-value commitment to the number of updates that have been performed; this is the reason we discussed the increment-only building block before this construction.

To get a sense of how this key-value commitment evolves with insertions as well as updates, let's look at a few examples. We have the initial commitment to the empty key-value map, which is (1, g) as before. If I insert the value v1 corresponding to the key k1, I get (g^{v1}, g^{z1}), as before. If I then insert (k2, v2), I get (g^{v1·z2 + v2·z1}, g^{z1·z2}); again, this is as before. Now I'm going to update the value v1 corresponding to the key k1 by delta, that is, an additive increment of delta to v1. For that, I essentially pretend I'm inserting afresh into the slot for k1, with value delta: I perform an operation exactly like all the insert operations before, re-inserting z1 and using the value delta. The commitment now takes a slightly different form. The second component contains z1², indicating that k1 has not only been inserted but also been updated once. Furthermore, apart from a factor of z1, the exponent of the first component corresponds to having v1 + delta for k1 and v2 for k2, as desired. The only additional artifact is the z1 sitting outside, and that actually tells you that the value corresponding to the key k1 has been updated once.
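Here is a small standalone sketch of this update-as-insert behavior, reusing the same toy group as before and hard-coding the primes for readability; again, these parameters are insecure illustrative stand-ins of my own, not the paper's.

```python
# Toy sketch: in KVaC, an additive update is literally another insert.
N = 7 * 11 * 13 * 10007 * 10009   # insecure stand-in modulus
g = 2
Z = {"alice": 101, "bob": 103}    # stand-ins for hash-to-prime outputs

def insert(C, key, value):
    """(C1, C2) -> (C1^z * C2^value mod N, C2^z mod N) with z = Z[key]."""
    C1, C2 = C
    z = Z[key]
    return (pow(C1, z, N) * pow(C2, value, N) % N, pow(C2, z, N))

def update(C, key, delta):
    """Additive update by delta: the very same operation as an insert, so
    no knowledge of the current value is needed (delta may be negative)."""
    return insert(C, key, delta)

C = insert((1, g), "alice", 5)
C = insert(C, "bob", 7)
C = update(C, "alice", 10)        # alice's value becomes 15

# Sanity check against the closed form from the talk: the first component is
# g^(z_a * ((5+10)*z_b + 7*z_a)) and the second is g^(z_a^2 * z_b), the extra
# z_a recording that alice's entry has been updated once.
za, zb = Z["alice"], Z["bob"]
assert C == (pow(g, za * (15 * zb + 7 * za), N), pow(g, za * za * zb, N))
```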
So in this way, we can make the update exactly like the insert. This lets us update the values and adds a few more z's into the mix, but that's fine: we can still prove key binding using the ideas from the insert-only key-value commitment as well as the increment-only key-value commitment, which, as alluded to before, now appears in the second component. The second component is an increment-only key-value commitment to the number of updates that have been performed on each of the keys.

So that's our entire construction. KVaC gives you a succinct, efficient key-value commitment whose operations, as well as the size of the commitment, are independent of the size of the map, and in which you can perform additive updates oblivious to the prior values.

Some notes on aggregation. Aggregation is possible: we can aggregate proofs and batch-verify them using, essentially, more Bézout-style arithmetic. However, aggregated proofs cannot be aggregated further, and the reason is that the increment-only key-value commitment, and hence our final construction KVaC, contains a non-membership proof: as explained before, pi2 and pi3 in the increment-only commitment are essentially a non-membership proof of z in pi1. This is inherited by KVaC, and such proofs cannot be aggregated further. This is also a limitation in the work of Boneh et al. The final question is whether one can aggregate updates and do some sort of batch update; this is actually not possible, as noted by prior work.

To conclude, we note that we have enabled essentially stateless validation of a blockchain, barring two group elements, and this can actually help us construct a new blockchain where validation is a much easier task. More work is needed, however, to support other features of an account-based blockchain, such as smart contracts. Another question to think about is whether one could come up with a scheme that has all the benefits that KVaC does, but also supports continued aggregation and disaggregation, as some existing works do. And a final question: can we achieve some notion of privacy in terms of the update information that's released? Can we hide the values? Can we hide the keys? This would be really interesting in the context of designing a blockchain that also provides privacy. And of course, key-value commitments, like accumulators, vector commitments, sub-vector commitments, and all those great primitives, would potentially have uses outside of blockchains in designing a lot of new cryptography, and that still remains to be seen. And with that, I'd like to conclude. Thank you, everyone.