Hi everyone, my name is Peter Gaži, and I will talk to you today about ledger combiners for fast settlement. This is joint work with Matthias Fitzi, Aggelos Kiayias, and Alexander Russell. The main objective of this work is to improve settlement times for Nakamoto ledgers, so let me start at the beginning and describe how Nakamoto consensus works. In this class of protocols, the parties collect transactions into blocks that are then appended to an ever-growing tree, as you can see in the picture. The current state of the ledger can be read out from this tree as the longest chain, and this is also the chain that honest parties extend whenever they get the opportunity to create a block. These block-creation opportunities are distributed fairly using some kind of anti-Sybil lottery, which I will talk about in a minute, and we know that protocols from this class have, broadly speaking, the following property: if a sufficient fraction of the resource underlying this anti-Sybil lottery is in the hands of honest players, then so-called eventual consensus arises on which chain is the winning one, and therefore the more blocks are collected on top of a particular transaction, the more likely it is that this transaction remains part of the ledger, in the same position, for all future times. Several such anti-Sybil lotteries have been considered for use with Nakamoto consensus. The most popular is undoubtedly proof of work, where parties try to solve a computational puzzle and the success probability in the lottery is proportional to the amount of computation invested into solving it. But there are also alternatives, such as proof of stake, where success is proportional to the amount of stake owned by the party as recorded on the blockchain itself, or proofs of space, where the success probability is proportional to the amount of (typically disk) space dedicated to the protocol that cannot be used for other purposes.
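To make the longest-chain rule just described concrete, here is a minimal sketch (my own illustration, not part of the talk) of reading the ledger state out of a block tree; the block names and representation are purely hypothetical:

```python
# Minimal longest-chain readout over a block tree. Each block points to
# its parent; the ledger state is the longest path from the genesis block.
GENESIS = "G"

def longest_chain(parents):
    """parents maps block -> parent block; returns the longest chain
    from GENESIS, which is the chain honest parties would extend."""
    def chain_to(block):
        chain = [block]
        while chain[-1] != GENESIS:
            chain.append(parents[chain[-1]])
        return list(reversed(chain))
    # Tips are blocks that no other block builds on.
    tips = set(parents) - set(parents.values())
    return max((chain_to(tip) for tip in tips), key=len)

# A small fork: chain G-A-B-C competes with the shorter fork G-A-D.
tree = {"A": "G", "B": "A", "C": "B", "D": "A"}
print(longest_chain(tree))  # ['G', 'A', 'B', 'C']
```

The fork G-A-D loses simply because fewer blocks were collected on top of it, matching the stability intuition above.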
Nakamoto consensus comes with a lot of advantages. It is a simple and elegant protocol, but it has also turned out to be very resilient in hostile environments: in principle, an arbitrary minority of the underlying resource can be controlled by an adversary trying to disrupt the system, and still the protocol provides eventual consensus. This is also true in settings with fluctuating levels of participation, where parties might join and leave the execution of the protocol at will, without notifying others beforehand; even in this setting, the protocol provides eventual consensus as long as the honest-majority assumption holds for the parties that still participate. The performance metric that we are interested in in this work is settlement time, or latency; I will use these two terms as synonyms throughout the talk. By this I mean the time between the moment when a transaction enters the system and parties start trying to include it into the ledger, and the moment when this transaction is universally recognized as a stable entry in the ledger. When it comes to latency in Nakamoto consensus, there is an intrinsic barrier in play here which I would like to talk about. Imagine a transaction that is included in a block in a Nakamoto-style blockchain. We know that its stability is proportional to the number of blocks on top of this transaction; however, in reality the picture looks more like this, and the reason is that there is an intrinsic limit on the block-creation rate in all these protocols: we want to avoid forks. If blocks are created too often, even honest parties create forks, because they simply don't know about blocks created by other honest parties, due to network delays. This limited block-creation rate also implies limited settlement speed, as is clear from the picture above. There have been several approaches to overcome this latency barrier of Nakamoto-style
protocols, and with very few exceptions they can be split into two categories. In the first one, the protocols rely on stronger assumptions and provide improved latency guarantees in the optimistic setting where the stronger assumptions are satisfied; typically, the protocols fall back to standard Nakamoto security and latency guarantees if the assumptions are violated. Another group of solutions are so-called layer-2 solutions, which give up on the goal of maintaining a distributed ledger of all transactions in the system and instead let parties settle their transactions bilaterally, only informing the ledger occasionally about the outcome of such settlement. Both these approaches have their merits; nonetheless, there is still a fundamental question remaining, which is: what are the best latency guarantees that can be achieved by Nakamoto consensus without making any of these concessions? This is the question that we look at in this work, and the approach that we take to address it is to leverage parallel composition. Let me just briefly mention at this point that, in broad terms, a very similar approach, although in a different shape, was also taken in the design of the recent Prism protocol. So let's move to our contributions. To be able to express our combiner results, the first contribution that we put forward is a new abstract model for ledgers, with the design goals of simplicity and generality, so that it allows us to express the combiner results in the simplest way possible. Using this new model and new language, we then provide two classes of combiners: the first one is a combiner for security amplification, and observe that this is in fact the same thing as latency reduction; the second one is a robust ledger combiner. I'll now give you a one-slide overview of each of these contributions before diving into the details of some of them. So, first looking at our model, the basic notion that we define is the so-called ledger,
unsurprisingly. This is basically a static snapshot of the state of the blockchain, if you wish, or of another structure carrying a ledger, and it simply consists of a static set of transactions and a rank function that attributes a positive real number to each transaction in this set. The role of this rank function is that it orders the transactions, it captures their stability, and it is also loosely related to time. I will discuss all these points in greater detail later, but if you want an intuitive example of a rank function, just consider the timestamp of the block carrying the transaction in protocols such as Bitcoin. On top of this static notion, we also define the notion of a dynamic ledger, which captures the evolution of the ledger in time, and is therefore a time-indexed sequence of such static ledgers as I described above. This simple formalism is already sufficient to express persistence and liveness as the properties of our interest, and I will show you how this is done and how it allows us to capture the properties of our combiners. The first combiner that we present is a security-amplifying combiner, which takes m parallel independent dynamic ledgers of the type I just described and produces on top of them a single virtual dynamic ledger, again a dynamic ledger of the type we introduced, in such a way that the combined construction allows for something we call fast submission, where a transaction can be entered into all of the underlying ledgers at the same time; this provides a settlement-time speedup by a factor that is linear in m, the number of underlying ledgers. At the same time, the construction also allows for a slow submission mode, if you wish, where the settlement-time guarantees are essentially comparable to those of a single underlying ledger; this basically means submitting the transaction to just one of the underlying ledgers, at the cost of an additional multiplicative factor of log m in the settlement time.
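Since the static and dynamic ledger notions from this overview recur throughout the talk, here is a tiny illustrative encoding in Python; the class and field names are my own, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Ledger:
    """A static ledger: a set of transactions with a rank for each."""
    rank: dict  # transaction id -> positive real rank

    def prefix(self, r):
        """Transactions of rank at most r -- the prefixes that the
        persistence properties compare across time."""
        return {tx: rk for tx, rk in self.rank.items() if rk <= r}

# A dynamic ledger is then just a time-indexed sequence of snapshots.
dynamic = [
    Ledger({"tx1": 1.0}),
    Ledger({"tx1": 1.0, "tx2": 2.5}),
]
print(dynamic[1].prefix(2.0))  # {'tx1': 1.0}
```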
Let me already mention at this point that the typical settlement time in a Nakamoto-style ledger, to attain a negligible error, is proportional to the security parameter; therefore, if we allow the number of chains in our construction to scale with the security parameter, we obtain a construction that provides constant-time settlement with only a negligible error, and this is the first time such a construction has been provided. Finally, the second combiner that we present in our work is a robust combiner, meaning that it combines m parallel dynamic ledgers into a single one having a robustness property: some persistence and liveness guarantees are maintained even if a minority of the underlying ledgers are fully corrupted and no longer provide any reliable state. This treatment illustrates the versatility of our dynamic-ledger notion; however, I will not give you further details on this construction, and I invite you to look into the paper if you are interested. So with that, let me discuss in greater detail the first two contributions that I mentioned, and again let's start with the model itself that we propose. As I already mentioned, the basic notion is the static notion of a ledger, which consists of a pair of a set of transactions and a rank function, and on top of that we consider the dynamic ledger, which captures the evolution of the ledger in time and is a time-indexed sequence of ledgers that we denote as you can see in the picture, where L^t is the ledger corresponding to time step t. In the visualizations that I will show you in a minute, I will typically depict the ledger that corresponds to time t, the ledger L^t, by a line of length t, and the reason for this is that one can then intuitively visualize a transaction that has rank at most t as a point on this line, where the position of the point corresponds to the
rank of the transaction. This is the visualization approach that I will adopt in the descriptions that follow, and with it I will try to show you how we express liveness and persistence in this formalism. Let's start with liveness, and let's consider a transaction that enters the system at some time t0. We require that if we look at a particular ledger L^t corresponding to a time step t sufficiently later than t0, this transaction appears in the ledger L^t, and it appears there with a rank that is at most t0 + r, where r is a parameter of the liveness property. We also allow this to hold only with a probability admitting an error upper-bounded by an error function, denoted l for liveness, that takes the size of the window r as its input. Similarly, we also define persistence: if we look at the ledger L^t, the ledger L^{t+1}, and all subsequent ledgers, take a parameter r that defines the window as you can see in the picture, and look at the prefixes of all these ledgers consisting of transactions that have rank at most t - r, then we require that these prefixes are identical, in the sense that they contain the same transactions with the same ranks. Again, this should hold except with an error bounded by an error function, denoted p, depending on the length of the interval r. In fact, we will call this property absolute persistence, and denote the error function p_a correspondingly, because we also introduce a weaker property that is new to our work, which we call relative persistence, and which I will tell you about next. In the definition of relative persistence, we again look at a ledger L^t and the subsequent ledgers, and we now define two consecutive intervals of size r and s, respectively, as you can see in the picture. We require that any transaction that currently, at time t, appears in the first segment of
the ledger L^t, which means it has rank at most t - r - s, will in any future ledger L^{t'} have to appear somewhere within the first two segments of that ledger, so up to rank t - s. This means that the transaction does not need to be stable yet, its rank might still change, but it cannot increase by too much. We also require a dual property, which is that any transaction that would appear in any such future ledger L^{t'} within the first two segments, so with rank at most t - s, already has to be present in our ledger L^t now, that is, at time t; this is what is illustrated by the second subset relation in the picture. Again, we allow for an error probability bounded by an error function p_r, depending on the sizes of the intervals r and s. Now, this might sound like a somewhat arbitrary notion, so let me try to convince you that relative persistence is in fact a useful one. First of all, notice that it is weaker than absolute persistence, and therefore it might potentially occur faster; in our later treatment we show that, in the cases we are interested in, it actually does. However, it is interesting that relative persistence is often already sufficient for settlement, and let me give you some more details on this. Let's start with a very natural and unsurprising statement: liveness and absolute persistence together imply settlement, or if you wish, absolute settlement, which just means that the ledger up to the transaction of our interest will not change. This is a very simple statement that can be obtained by combining liveness, to show that the transaction will be included in the ledger, and absolute persistence, to show that its position and the prefix up to that transaction will not change. However, interestingly, there is also an analogue of this statement working with the relative notions, namely that liveness and relative persistence imply relative settlement, a term that we introduce.
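As an illustration of the two inclusions that define relative persistence, here is a small checker, assuming ledger snapshots given as plain transaction-to-rank dictionaries; the function and variable names are hypothetical, not from the paper:

```python
def relatively_persistent(L_now, L_future, t, r, s):
    """Check the two inclusions of relative persistence between the
    snapshot L_now (at time t) and a later snapshot L_future:
    1) every tx of rank <= t-r-s now keeps rank <= t-s later;
    2) every tx of rank <= t-s later is already present now."""
    deep_now = {tx for tx, rk in L_now.items() if rk <= t - r - s}
    shallow_later = {tx for tx, rk in L_future.items() if rk <= t - s}
    cond1 = all(tx in L_future and L_future[tx] <= t - s for tx in deep_now)
    cond2 = all(tx in L_now for tx in shallow_later)
    return cond1 and cond2

now = {"a": 1.0, "b": 6.0}
later = {"a": 2.0, "b": 5.5, "c": 9.0}   # "a" moved rank, but not too far
print(relatively_persistent(now, later, t=10, r=5, s=2))  # True
```

Here "a" is in the first segment at time 10 (rank 1.0 ≤ 3), and it stays within the first two segments later (rank 2.0 ≤ 8), while every later transaction of rank at most 8 was already visible at time 10.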
What it roughly means is that the transaction will stay in the ledger, and any conflicting transaction tx' that could potentially overtake our transaction tx in the future is already present in the ledger. Note that this is exactly what relative persistence guarantees you, namely its second property: as you can see in the picture defining relative persistence, if a transaction is observed by a party to be in the current ledger L^t, in the first segment, then the party knows that the transaction might change its rank, but in any future ledger it will be contained in the first two segments of that ledger, and any transaction that would also be part of these first two segments already has to be present in the current ledger that the party sees. This is exactly the second property that we want for relative settlement. Relative settlement actually turns out to be useful: if you consider, for example, UTxO transactions as in the Bitcoin system, then imagine a UTxO transaction where all of its inputs are already settled, the transaction itself is relatively settled, and we do not see any conflicting transaction currently in the ledger. Then we know that our transaction will never be overtaken by a conflicting transaction and cannot be invalidated in the future, and therefore we can already act upon it, depending on the semantics of the transaction. So, to sum up: a dynamic ledger is a time-indexed sequence of static ledgers with these three properties, liveness, absolute persistence, and relative persistence, parameterized by the respective error functions. Notice that prior work studying concrete protocols shows that these protocols, under certain safe conditions, which means different things for different protocols but is typically the honest-majority assumption, bounded network delays, and so on, both Nakamoto ledgers and, more
broadly, a wider class of protocols, provide exponential security, in the sense that both absolute persistence and liveness hold except with an exponentially vanishing error term, and all these protocols can be captured as such exponentially secure ledgers in our framework. Notice that, in fact, our formalization does not need any formal notion of an adversary; that is because the error functions themselves already cover any presence of the adversary and the consequences of any actions that the adversary might take under such safe conditions. So this was a brief introduction to our model, and with that, let me use it to describe our ledger combiner that provides security amplification, and hence our goal of latency reduction. Before giving you the construction, let me briefly comment on the independence assumption that we need to make. Clearly, some independence needs to be assumed to obtain any amplification; however, it seems unlikely that one could assume full independence of such ledgers, and therefore we define the notion of sub-independence. Dynamic ledgers are called sub-independent if, for any subset of them, the collection of the respective failure events, where by failure events we mean the failure of any of the three properties I told you about, has the property that the probability that these failures occur in all of the ledgers is upper-bounded by the product of the probabilities of these failure events occurring in the individual ledgers, which are in turn bounded by the error functions I described a minute ago. We also talk about epsilon-sub-independence when we condition on an event that occurs with probability at least 1 - epsilon. In the paper, we also look carefully at how epsilon-sub-independence can be obtained in both the PoS and PoW settings; let me just briefly sketch this. In the PoS case, one can obtain this property by running parallel Nakamoto-style PoS ledgers where the leaders are sampled from the joint stake distribution across all of these ledgers, but the randomness used for sampling them is independent for each ledger.
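Written out symbolically (in my own notation, which may differ from the paper's), the sub-independence condition just described says the following:

```latex
% F_i = failure of liveness, absolute or relative persistence in ledger i.
% For every subset S of the m ledgers:
\[
  \Pr\Bigl[\,\bigcap_{i \in S} F_i\,\Bigr] \;\le\; \prod_{i \in S} \Pr[F_i],
\]
% and epsilon-sub-independence requires this only conditioned on an
% event E with \Pr[E] \ge 1 - \epsilon.
```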
In the PoW case, we can leverage the so-called m-for-1 PoW mechanism, where a miner cannot decide for which blockchain he wants to mine a block; rather, he makes a mining attempt, and only if he succeeds does he learn to which of the blockchains the block can be added. In both cases, these measures are put in place to make sure that the adversary cannot disproportionately focus his power on one of the underlying ledgers. Let me remind you that we aim to provide a combiner that allows submitting a transaction either to all underlying ledgers simultaneously or just to a single ledger, which we call the fast and slow submission modes, respectively; all intermediate possibilities are of course possible as well, and the mode of submission can be chosen per transaction by the user issuing it. This offers a nice trade-off, because typically there are fees associated with entering a transaction into a ledger, and this means that the more fees you are willing to pay, the better the settlement guarantees our construction provides for your transaction. So with that, let me describe the heart of our construction, which is its rank function. This rank function tells us how the rank of a transaction in the combined ledger is derived from the ranks of this transaction in the underlying ledgers. It is defined by the formula you can see in the gray box on the slide, and I will now give you the intuition behind it. First, observe that it indeed combines the ranks of the transaction in the individual underlying ledgers, where the index i goes from 1 through m, ranging over the underlying ledgers. Also notice that the rank function is parameterized by a natural number ℓ; I will talk about its role in a minute. There is also a threshold θ playing a role, which is simply the minimum of the rank of the
transaction across all the underlying ledgers, plus the ℓ log m term, as you can see on the slide. Looking at the formula, a good way to get some intuition about it is to see the rank as a simple average under the exponential functional x ↦ e^{-x/ℓ}, which is justified by the equation that I show you on the slide and which really positions this rank as such an average; this form of the formula is inspired by results from the theory of regret minimization. One can observe simple lower and upper bounds for this rank function: it is lower-bounded by the minimum rank of the respective transaction across all the underlying ledgers, and upper-bounded by this minimum plus the ℓ log m term. Now, an important role in this formula is played by the parameter ℓ, and to be able to appreciate it, observe that we in fact put two seemingly contradictory requirements on the stabilization speed that our construction should provide; by stabilization speed I mean how quickly the stability of a transaction grows as a function of, basically, the difference between the current time and the rank attributed to this transaction. We require an ample speedup for fast submissions, but in the long term, for slow submissions, we cannot provide a better stabilization speed than a single ledger provides, exactly because in the slow submission mode the transaction is only submitted to a single ledger. So there needs to be a transition between these two stabilization speeds, if you wish, and ℓ is the parameter that characterizes this point of transition. Looking ahead, we will choose ℓ to be proportional to the security parameter, so that e^{-ℓ} is an acceptable error probability, and therefore, by the time we transition from the accelerated stabilization speed to the standard one, we will already be at an error level that is acceptable to the designers of the construction.
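To make this description concrete, here is a small reconstruction of the rank function as I understand it from the talk (the exact formula is on the slide and in the paper; treat this as a sketch with illustrative names):

```python
import math

def combined_rank(ranks, ell):
    """Sketch of the combiner's rank function: an average of the
    per-ledger ranks under the exponential functional exp(-x/ell),
    counting only ledgers whose rank is below the threshold theta."""
    m = len(ranks)
    theta = min(ranks) + ell * math.log(m)   # cutoff for contributions
    total = sum(math.exp(-r / ell) for r in ranks if r <= theta)
    return -ell * math.log(total / m)

# Per-ledger ranks of one transaction; the outlier 40.0 falls above
# theta and is cut off, so it does not dilute the combined rank.
ranks = [10.0, 12.0, 11.0, 40.0]
r = combined_rank(ranks, ell=5.0)
# The bounds from the talk: min <= rank <= min + ell * log(m).
assert min(ranks) <= r <= min(ranks) + 5.0 * math.log(len(ranks))
```

Note how both bounds mentioned above fall out directly: if all ranks are equal the average collapses to that common rank, and if only one ledger contributes, the rank is that minimum plus ℓ log m.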
This consideration is intuitively depicted in the informal figure on the right-hand side, where we plot the stability of a transaction, that is, the negative logarithm of the probability of a failure of stability, as a function of the difference between the current time and the rank of the transaction. The solid black line describes the stabilization speed of a single ledger that provides exponential security, where the exponent increases linearly, as you can see; the dotted black line illustrates optimal amplification, where the slope is m times higher than the slope of the solid black line; and the blue line captures what our construction provides. We can see that it provides the accelerated stabilization speed at the beginning, and later the stabilization speed converges to that of a single ledger, as it necessarily must, due to the observation I just gave you. So this is, in some sense, the best one could get, and it is also what our construction provides. Let me also mention the threshold θ. The reason we have it in the formula, and only count contributions from the ledgers in which the rank of a transaction is not too much larger than the minimum rank across all the ledgers, is that the absolute persistence property is sensitive to small changes in rank, and such a cutoff point guarantees that eventually even absolute stability is achieved. Finally, I will just mention in passing that in settings where we do have a conflict relation on the space of transactions being considered, something we formally describe in the paper, we consider a preemptive version of the construction where contributions are only counted from ledgers in which the transaction is not preceded by a conflicting one. With that, I am now able to formulate our main result
for this combiner. This result shows that if we take m dynamic ledgers D_1 through D_m with exponential liveness and absolute persistence guarantees that are sub-independent, operating on a transaction space with a conflict relation, then the construction I just described, our combiner, can operate in both the fast submission and slow submission modes. In the case of fast submission, the probability that the transaction is not relatively settled after 2r time steps is upper-bounded by two exponential terms, where the first one captures the m-fold speedup and is dominant at the beginning, right after the transaction is submitted, and later the second term becomes dominant, capturing the standard settlement speed that is achieved afterwards. The construction can also be used for slow submission, as I already said, and there the probability that the transaction is not absolutely settled after 2r steps is upper-bounded by an exponential term providing the standard settlement speed, with the settlement time multiplied by the additional factor of log m that I mentioned earlier. Let me conclude by formulating the corollary of our fast-submission result, which says that if we take a number of chains that is proportional to the security parameter, then, as we already observed, our combiner in the fast submission mode achieves constant-time settlement except with a negligible error. These are the main results of our work. With that, I would like to thank you for your attention, and I'll be happy to take questions. Thank you.