Welcome to the second day of the winter school. First of all, I apologize for the delay, but that's what happened; there was traffic, that's all. So, we are very happy to have our speaker, Professor Aggelos Kiayias. He is Professor and Chair of Cyber Security at the University of Edinburgh, and an expert in blockchain and in the provable security of blockchain systems. Before he assumed this position, he was at the University of Athens and at the University of Connecticut in the US, and we are very happy to have him here. He will talk to us about proving the security of blockchains. Alright, so thank you very much, thank you very much for the kind invitation. I'm very happy to be here and talk to you about proving the security of blockchain protocols, which is a topic that I've worked on in the last few years with a lot of enthusiasm. So, let's see if this works. Yeah. So, the topic, if you want a title for it, is foundations of blockchain protocols. We'll try to understand the fundamental security properties of blockchain protocols: what are they, and how can we prove them? Proving security requires us to define a precise model and then give mathematical arguments for why a blockchain protocol satisfies the properties that it's supposed to satisfy in this model. So, before I start discussing blockchain protocols, I would like to take a few minutes to give a little bit of perspective about how we do that in general, because that is going to be a good starting point that will guide us through this presentation. When you build a secure system, you have an objective, and this objective has to be precisely defined, something whose importance cannot be overstated and that I will come back to in the course of the presentation.
So, once you have the objective, you can ask what are the resources that can be utilized by the parties that are going to be engaged in a certain implementation that attempts to meet that objective. At the same time, you define the threat model, which tries to capture precisely how the adversary may operate; furthermore, the threat model should be general enough to capture all possible attacks that seem to be relevant for the practical deployment of a system that meets that objective. Given those, a candidate solution is provided, is designed, is laid out, and then, using a set of assumptions, we attempt to provide a security proof which establishes that the candidate solution meets the objective, in the threat model, under the assumptions given, assuming that the posited resources are available to the parties that are running it. And once we have this, we can start asking questions which will enable us to criticize how well our candidate solution has achieved its objective. For example, are these resources indeed available? For instance, a resource which is frequently assumed in many protocols is the existence of high-quality private randomness available to the entities participating in the protocol; this is frequently assumed, but at the same time, when security problems arise, it's one of the possible attack points. Is the threat model realistic? Are these really the relevant attacks, or are there in practice other attacks that are more relevant than those captured by the expression that defines our adversarial threat model? Are the assumptions plausible? Do they actually hold in reality? And finally, even if you have all those, you can start asking: is the solution that was created efficient?
Because it could be the case that there are different solutions that meet a certain objective under different assumptions, and choosing the right one, finding the right one, and understanding the design space completely is an important, integral part of this science of computer security that we are describing. Let me just make a stop here: without going into much detail, I would like you to appreciate what it means to have an objective. If we say we have an objective, do we mean something which is vaguely described, something like "Alice and Bob want to communicate securely"? In cryptography, over the years, we've developed very precise ways to describe what an objective is, and here is just a snapshot, which you are not supposed to read, but which will give you an appreciation of how something vaguely described like this is cast with high precision in terms of pseudocode. And I should say that this definition, which is the definition of a key exchange protocol, is just the tip of the iceberg of what happens in those papers that properly capture, in technical terms, the objectives that are of relevance for the various security applications that we care about. So how good are we with this?
So I'll just give you secure channels, and I'll try to map it to the previous road map. Secure channels, as you obviously know, is one of the most important problems in computer security and in cryptography; I mean, this is the problem, the first thing that we hear about when we start studying the area. What we want is to build a secure channel between two parties, and this problem was put forth, at least in its modern form, in the seminal paper of Diffie and Hellman in 1976. Since then, it's interesting to observe that our community has been actively researching how this objective can be reached, following the road map I outlined, and I'm showing you here a specific instance. So, over time, here are the resources that we have isolated as the ones that we would like to have available to the parties: there is randomness available; there is a PKI, or some way to authenticate channels; and there is point-to-point communication infrastructure. These are the resources that are available to the parties that are engaged in the protocol. There is an active man-in-the-middle attacker as part of the threat model, which means that the attacker is active in the sense that not only is the attacker eavesdropping on what the parties are communicating to each other, but it actually attempts to interfere. The Diffie-Hellman protocol in 1976 was the first description of how this objective could be reached, and it's interesting to see how, more than 30 years later, there are still efforts in fully specifying this protocol, in the form of TLS 1.2; you may know that TLS 1.3 is now underway, and we are still looking into how these protocols are defined and specified. Formal security modeling for secure channel protocols started in the early 80s with Dolev and Yao, and it took us again more than 30 years, until a publication just recently in CRYPTO, before, for the first time, a complete proof of the protocol as implemented was given. Here is the assumption: the
decisional Diffie-Hellman assumption, which is an assumption that can be used to prove the security of that protocol. So this just gives you a glimpse of the breadth, and of the span of time it takes, to fully understand a problem which is seemingly as simple as a secure channel. So what about the Bitcoin blockchain as an objective? How can we map it onto this picture? Interestingly, the first thing we have about Bitcoin is actually not the objective but the protocol itself; the solution presented itself first. The resources that are available are also not completely spelled out, but it appears that we are in a setting where we do have a broadcast primitive which is not authenticated and is not reliable; not reliable here means that it is a broadcast that can be manipulated by the adversary and can provide divergent views to different parties on the receiving end. The threat model states something about the hashing power of the adversary, also vaguely stated, and the objective itself isn't clear, but it should be something that has to do with a reliable record of transactions. Finally, the basic assumptions have to do with the properties of an underlying hash function, some of which we know, like collision resistance, but others of which are much harder to define, like, for example, how well SHA-256 realizes a proof of work, and what a proof of work exactly is in this context. And the security proof itself is elusive. So this is basically the picture as it was in the first few years after Bitcoin was introduced, and as Bitcoin started to become very popular, more and more people started to look at it from a formal security point of view. Of course, the parallel that was there from the beginning was consensus, because consensus, contrary to the Bitcoin blockchain, is a problem that we understand well in computer science; it's a classical problem in computer science, one of those problems that illustrate the beauty of computer science like few others. It is not a
number-theoretic problem; it is not, if you want, a mathematical problem in the strict sense. It is a problem that has to do with information, and with the ability of parties that are decentralized and operating in the presence of an adversary that attempts to confuse them to nevertheless reach a single view. This problem was introduced in the eighties, and there is still, understandably, a lot of interest in it, which has been reinvigorated, I should say, with the introduction of Bitcoin. So what is the consensus problem? If you would like to see the objective of consensus, here is a very simple single slide. We have a number of parties, and let's say they would like to decide on the simplest possible form of information, just one bit. They start with their own input bits, and they would like to conclude with an output bit which should satisfy the following properties. When looking at these properties, even though they are informally stated here, you should understand them in the presence of an adversary that controls a subset of the parties running the protocol and tries to violate those properties. The first one is agreement; agreement basically means that all the parties should output the same value. The second property is validity: if all honest parties have the same input bit, then this should actually be the bit they output. And finally, termination states that the parties, all of them, should actually terminate. It's interesting to point out that it is the conjunction of those three properties that makes the problem non-trivial. If, for example, you do not want to satisfy validity, it's extremely simple to satisfy the other two properties: just write the protocol that outputs zero. This protocol clearly satisfies agreement and termination; nevertheless, it's also a useless protocol, since there is nothing about the input of the parties that
is present in its output. Similarly, if you would like to satisfy just validity, it's extremely simple to write the protocol that does so: just have every party output its input. The output is clearly one of the inputs if all the honest parties agree; however, if the honest parties disagree at the beginning, agreement is violated. And of course, if you forget termination, a protocol that never terminates is equally not useful. So it is the conjunction that is the problem, and it is a very interesting problem to solve with a protocol when the parties are operating against an adversary. There are many ways to solve it that have been studied over the years, and here is an exemplary instantiation using the Dolev-Strong protocol. So, to achieve consensus, suppose you have authenticated point-to-point channels, and suppose that the adversary commands a minority of the resources, where resources here should be interpreted as the number of parties running the protocol. Then there is a very simple reduction: if you run the Dolev-Strong protocol for t + 1 rounds, where t is the number of corrupted parties, and you assume that digital signatures are secure, you can actually get a proof that it solves consensus in a synchronous model. Now, this is not the only way to solve the problem; we have come a long way and we are now able to solve it in various ways that improve on many of those assumptions. But the point is that consensus has been looked at from a formal security point of view, as an objective. So why is this not good enough for us? It seems that the people behind blockchains are trying to do something similar: we could just run consensus on, say, all the transactions, and then we would produce an indisputable public record to which we would be able to refer for all the things that the parties are engaged in. The problem is actually this part: the assumption that the resource available to the parties is authenticated point-to-point channels.
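To make the three properties concrete, here is a toy sketch (the names are mine, and a real consensus protocol is interactive rather than a single function) contrasting the two trivial "protocols" just mentioned: constant-zero satisfies agreement but not validity, while echoing one's own input satisfies validity but not agreement.

```python
# Toy illustration of the consensus properties discussed above.
# A "protocol" here is modeled as a function from the honest parties'
# input bits to their output bits; this ignores interaction entirely.

def agreement(outputs):
    """All honest parties output the same value."""
    return len(set(outputs)) == 1

def validity(inputs, outputs):
    """If all honest inputs are equal, that bit must be every output."""
    if len(set(inputs)) == 1:
        return all(o == inputs[0] for o in outputs)
    return True  # no constraint when honest inputs disagree

def constant_zero(inputs):   # satisfies agreement, violates validity
    return [0] * len(inputs)

def echo_own_input(inputs):  # satisfies validity, violates agreement
    return list(inputs)

inputs = [1, 1, 0]
print(agreement(constant_zero(inputs)))              # True
print(validity([1, 1, 1], constant_zero([1, 1, 1]))) # False
print(validity(inputs, echo_own_input(inputs)))      # True
print(agreement(echo_own_input(inputs)))             # False
```

Neither trivial protocol satisfies the conjunction, which is exactly the point made above.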
When I was describing Bitcoin as an objective, on the other hand, what we had was an unreliable broadcast of some kind. These two things are quite different, and it is, if you want, both interesting and surprising that one of the most interesting developments in distributed systems over the last 10 years was essentially ignored as a research area by the vast body of literature in the theory of distributed systems, mainly because the vast majority of papers that were looking at the consensus problem were not looking at this setting. Interestingly, this setting, even though there were glimpses of it in work prior to 2008, was not given any serious attention until the advent of the Bitcoin blockchain. So what is the objective of the Bitcoin blockchain? And when I say objective now, I don't mean it in a high-level way; I mean it as precisely as possible, so that we are able to make formal statements about whether a given protocol satisfies the objective. So here is the ledger objective, which was formalized for the first time in joint work with Juan Garay and Nikos Leonardos, which you can find on the IACR ePrint archive. In the same work we also proved that a suitable abstraction of the Bitcoin protocol, which we termed the Bitcoin backbone, realizes this objective, and we also discussed how, using that protocol, we can achieve other primitives such as consensus, and that when this requires extra effort, it is possible to do so. I'm going to give you an overview of this work, describing the rationale of the way we defined the protocol, the rationale behind the modeling, and the security proofs, and I'll give you a fairly complete description of the proofs of security that appear in that paper. So let me start by going over the objective: what do we mean when we say that a protocol implements a ledger, and what are the properties of such a protocol? Let's first make some simplifying assumptions; these are assumptions that are helpful when you
try to provide a formal treatment. The assumptions that we made when we worked on this are to consider synchrony in the following sense: time is divided in rounds, so you should think of the execution of the protocol as proceeding in lock step; parties engage with other parties in a synchronous fashion, they deliver messages to the network, and the network produces these messages back to them. I will show you more details about this modeling in a moment, but just keep it in mind while I introduce the two basic properties that a ledger should satisfy. The protocol organizes transactions in a sequence of blocks; in other words, sets of transactions appear in blocks, and blocks are connected to each other in chronological order. Given these two observations, let's define the two properties, persistence and liveness, of what I would call a robust transaction ledger. As before, you should imagine these two properties in the presence of an adversary that would like to violate them. Persistence talks about transactions themselves and how permanent they are. It is parametrized by a natural number k, and it says: if an honest party reports a transaction tx as stable, where "stable" is a moniker given to the assessment that the transaction is in a block which is k blocks deep in the ledger of the honest party, in other words the transaction has a number of blocks chronologically on top of it, then whenever an honest party reports the same transaction as stable, it will be in the same position. This property tells you that once you have heard from one of the parties running the protocol that a transaction has been stabilized and given a position in the ledger, you will never hear a different position from anyone else that reports that transaction as stable, where again stable means that it is more than k blocks deep. Observe that this property by itself doesn't say much; it's very conditional: it says that a stable transaction is in the same
position, in everyone's view. Because transactions are in a sequence of blocks, every transaction can be thought of as having a position: a certain block identity, and a position inside the block, counting from the genesis block. Observe that this position is only guaranteed to be the same once one of the honest parties has reported the transaction as stable; it could be the case that honest parties change their mind while the transaction is not yet stable. So, as I said, this property by itself doesn't say much, because it is very conditional: it says that if one thing happens then something else happens, so it could be the case that nothing ever happens. We therefore want to pair this property with another property that we call liveness, which basically tells you that things, good things, eventually happen. This follows the tradition in distributed systems where safety tells you that bad things don't happen, and liveness tells you that good things will eventually happen. So what are the good things that will eventually happen? Liveness is parametrized by u and k, two parameters, and it says the following: if all honest parties attempt to insert a certain transaction in the ledger, and they continue to do so for a certain number of rounds, which is the parameter u, then after the passage of this time all the honest parties will report it as stable, and they will continue to do so as the protocol continues its execution. So: bad things don't happen, in that parties don't disagree about the position of stable transactions; and good things do happen eventually, in that all honest parties will eventually report a transaction as stable. Together with this, you can see how the parameter u captures transaction processing time in a very broad sense: it basically says how long you have to wait for a transaction to stabilize in the network. In a security proof you would expect u to be further specified, and it is expected to be a function of k and potentially of other parameters.
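To make persistence concrete, here is a small sketch, under my own simplified representation of ledgers as lists of blocks, of the property as a predicate over the honest parties' views:

```python
# Sketch of persistence as a predicate over honest parties' ledgers.
# A ledger is a list of blocks; each block is a list of transactions.
# A transaction is "stable" if its block has at least k blocks on top.

def stable_positions(ledger, k):
    """Map each stable transaction to its (block index, index in block)."""
    pos = {}
    depth_limit = len(ledger) - k       # blocks 0..depth_limit-1 are k deep
    for b in range(depth_limit):
        for i, tx in enumerate(ledger[b]):
            pos[tx] = (b, i)
    return pos

def persistence(ledgers, k):
    """If two honest parties both report tx as stable, positions agree."""
    all_pos = [stable_positions(L, k) for L in ledgers]
    for p1 in all_pos:
        for p2 in all_pos:
            common = set(p1) & set(p2)
            if any(p1[tx] != p2[tx] for tx in common):
                return False
    return True

honest = [
    [["tx1"], ["tx2", "tx3"], ["tx4"], ["tx5"]],
    [["tx1"], ["tx2", "tx3"], ["tx9"]],   # diverges only near the tip
]
print(persistence(honest, 2))   # True: the stable prefixes agree
```

Note that the two ledgers above disagree near their tips; persistence tolerates this, exactly because unstable transactions carry no guarantee yet.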
It may be a little unclear at this point what the value of the parameter k should be, but the reason k exists, and its relevance here, is that we are going to establish these properties in a probabilistic sense; therefore there is going to be a probability of error, which we will show depends on k and is negligible in k, in the sense of an inverse exponential. So here is the ledger objective as we defined it in the GKL paper, and the question then was how it is possible to argue that the Bitcoin protocol meets that objective. So let's go into more detail about how we can describe the protocol in a way which is sufficiently precise to prove that it meets the objective. I will introduce here what I call the synchronous model for this class of protocols, and this will enable us to provide a precise description of the protocol. You have to think that in this synchronous model time is divided in rounds, and in each round each party is allowed a certain number of queries to a hash function; this parameter is called q. The hash function itself is going to be modeled as a random function, in what is called the random oracle model, which is an assumption used in the security analysis of many practical protocols. It is arguably a simplified model, in the sense that the random function itself is assumed to be external to the protocol, while any actual implementation of the protocol will have to instantiate the random oracle with a specific hash function; and here take not a grain of salt but actually a big chunk of it as to whether your instantiation of the random oracle with an actual hash function is something that will preserve its security properties. Removing the random oracle from the security analysis is an important endeavor, but for this analysis it will clearly be a future step. Messages are sent via a diffusion mechanism; we call it a diffusion mechanism to distinguish it from broadcast, and it tries to capture in a simple way the properties that you might get from a peer-to-peer
transmission of messages, where basically you have something like a broadcast, but you have no way to authenticate the source and no way to guarantee that all parties have exactly the same view of the messages that have appeared in a single round. To facilitate this manipulation of the way messages are delivered, we use the notion of an adversary we call rushing, which is already very well established in the body of literature of modern cryptography that studies multi-party protocols. Essentially, you can think of the diffusion mechanism as a storage box where parties insert their messages that are due for delivery, and you can think of the rushing adversary as having the final say in every round: it can go to these, if you want, mailboxes that are waiting for the parties to access them, and inject messages, reorder messages, and spoof their source. We do assume that all messages will be delivered; nevertheless, the order, or the actual number, since the adversary can inject messages, will not be guaranteed. This already rules out a wide class of consensus protocols that look at the number of messages received and try to make sensible decisions based on how many of them are equal or different, and so forth. Now, the participants: there are n - t honest parties running the protocol, where n is the total number of parties, and each one is going to be producing q queries to the hash function; the adversary, on the other hand, is going to be controlling t of the parties, and you can think of it, using now the terminology from Bitcoin, as a single entity that might be operating against the protocol. Because each honest party has q queries to the hash function, this is, as you see, a flat version of the world in terms of hashing power. Obviously a flat version of the world is not the actual world, but as you can relatively easily see, it is a worst case.
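The diffusion mechanism with a rushing adversary, as described above, can be sketched roughly as follows (the names and data representation are my own); the key constraints are that honest messages cannot be dropped, while injection, reordering, and per-party divergence are allowed:

```python
import random

# Sketch of the round-based diffusion mechanism with a rushing adversary.
# Honest messages deposited in a round must appear in every party's
# incoming tape for the next round; the adversary may reorder them,
# inject its own messages with a spoofed or absent source, and show
# different injected messages to different parties. It may NOT drop
# honest messages.

def diffuse(honest_msgs, parties, adversary):
    """Return {party: incoming tape for the next round}."""
    tapes = {}
    for p in parties:
        tape = list(honest_msgs)        # delivery is guaranteed...
        tape += adversary.inject(p)     # ...but injection is allowed,
        random.shuffle(tape)            # and so is per-party reordering
        tapes[p] = tape
    return tapes

class Adversary:
    def inject(self, party):
        # a different fabricated message per recipient: divergent views
        return [f"fake-for-{party}"]

tapes = diffuse(["blockA", "blockB"], ["P1", "P2"], Adversary())
# Every honest message reaches every party, whatever the order:
print(all({"blockA", "blockB"} <= set(t) for t in tapes.values()))  # True
```

The guaranteed-delivery constraint is the one the serialization argument later relies on; everything else about the tapes is under adversarial control.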
Essentially, if honest parties aggregate themselves in mining pools, just because they are honest this only makes things better for them; the worst possible case is the one where you have complete decentralization from the honest parties' perspective, and a single big mining pool acting under the control of the adversary. So this is the point that is made here. Observe also that our analysis follows what we might call the traditional cryptographic setting, bad guys versus good guys, instead of a number of rational actors running the protocol together; the rational aspect of these protocols is extremely interesting, and I will also hint at it in the presentation this afternoon. So how do we orchestrate the execution of the protocol? Suppose we have some protocol, which we are not yet specifying. We would like to understand what an execution is, what we mean by running the protocol; essentially, we would like to simulate the real world in a certain way which is controlled and specific for us. So here is the protocol, and you can imagine an adversary and an environment; you can think of the adversary and the environment as two parties that are acting in tandem, and all of them you can imagine as probabilistic polynomial-time machines, with the environment being the one that has the initiative over all the parties running the protocol, including the adversary itself. The environment is the first party to be initialized and the first party to execute, and it is the one that is going to spawn the other parties running the protocol, as well as the adversary; it is going to create them under specific conditions: the number of parties is fixed once they are created, and all of them run in a single instance of the protocol. Now, the view of an execution is the concatenation of the view of each party, at each round, that is running the protocol; the whole execution terminates when the environment terminates, and we will be interested in properties of this
view when we run this execution. A key point is that there are going to be various random variables of interest; quite importantly, there is going to be one that controls the random oracle as well as all the coins of the parties. What is very important here is that we now have a random variable which is completely defined, and we can try to express the properties I defined before, persistence and liveness for instance, as predicates on this random variable. So we could look at a certain sampling of this random variable and say: this is a sample that violates persistence, or this is a sample that violates liveness. This is the big benefit that we get from following this approach, which is firmly based on a long line of previous works that tried to formalize the meaning of security for multi-party cryptographic protocols, notably secure multi-party computation. Let's take a closer look at how this execution is going to work; I'm going to give you a view of this execution at the end of one round and the beginning of the next round, let's say round r. So let's start at the beginning of the round: the environment is providing inputs to the parties; those inputs, for example, can be sets of transactions that are to be included in the ledger, but you don't have to restrict yourself to this interpretation, even though this is the one that we're going to use when we formally describe the backbone protocol. So the environment here provides input to the parties running the protocol, and the parties will actually perform the steps of the protocol. The parties will have access, as you see here, to the hash function, which will be modeled as a separate entity and is going to be unpredictable from the point of view of the adversary; both the honest parties and the adversary will have access to the hash function, and when they query the hash function they may
get interesting answers, for example outputs that satisfy the proof of work. Some of the parties, when they get such interesting outputs, will put them in this diffusion mechanism, and it is going to be manipulated by the adversary. So here is what's going to happen within the round: the parties will be executed in sequence; they are going to query the hash function, deposit their messages to the diffusion functionality, and then the adversary is allowed to manipulate those messages as well as query the hash function itself. Note that this is not exactly how it happens in the real world, where things are actually concurrent, but the difference here only makes things better for us: we just serialize the whole execution, so the concurrency of the real world is serialized into an execution where honest parties are executed in sequence and the adversary is given the final say in every round, subject to certain restrictions; for instance, it is not allowed to drop messages. So, for instance, here the yellow box, which is discovered and broadcast by one of the honest parties, must be present in the incoming tapes of all the honest parties in the next round. The adversary may of course inject its own messages; it may inject them in arbitrary order, and it may even omit them from the incoming tapes of some of the honest parties. In particular, this illustrates graphically how the honest parties can have a divergent view of the messages that are floating in the network in a single round. So this is how the round structure of the model works. Once we have this, I would like you to be assured that all of it can be defined precisely, in the sense that we do have this random variable, and we can now start to define properties of a protocol. A property of a protocol is defined in an abstract sense as a predicate: the property is a predicate which takes as input the view of the protocol.
So: a protocol run by a number of parties has property Q, where Q is an arbitrary predicate, with error epsilon, if and only if, for all adversaries and for all environments, the probability that the view of the protocol in the execution with the adversary A and the environment Z satisfies the predicate Q is at least 1 minus epsilon, where epsilon is a small error that is negligible in some security parameter. So all the formalism I introduced so far enables us to describe precisely properties of protocols and when they are achieved. Now, once we have a protocol, we can express its properties formally and try to prove them by writing a theorem that establishes such a statement. What makes it difficult from a security-proof point of view is these universal quantifiers that you see here: essentially, we have to prove that no matter what the adversary does, and no matter how the environment acts and provides inputs to the parties, the property will not be violated except with the small probability epsilon. This model is quite general, and even though we are in this synchronous setting, the generality of the model enables us to capture many ill behaviors that happen in the real world. For example, imagine that some parties receive only some of the messages, which is something that happens frequently in reality; that's fine, we can simulate that by having those parties incorporated as part of the adversary, since we have quantified over all possible adversaries, and have the adversary drop some of the messages that they receive. Of course, a large mining pool performing some type of selfish mining, for instance, which is a known attack against the Bitcoin blockchain protocol, is also something that we can express in the model, and so is any combination of the above. This is the big benefit you get from having a universal quantification over all possible adversaries in an expressive model like the one I have shown: you don't have to look at specific attacks, but you can argue that your protocol works against all possible attacks within your model. Given the model is expressive enough, you are able to extract meaningful statements about its security.
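The probabilistic flavour of such a statement, a predicate holding except with probability negligible in a parameter, can be illustrated with a toy Monte Carlo experiment. This is my own illustration, not the backbone analysis: the "bad event" below is simply k fair coin flips all favouring the adversary, which occurs with probability 2^-k, an inverse exponential in k.

```python
import random

# Toy illustration of a property that holds except with probability
# negligible in k: the bad event is k independent fair coin flips all
# favouring the adversary, which happens with probability 2**-k.

def bad_event(k, rng):
    return all(rng.random() < 0.5 for _ in range(k))

def estimate_failure(k, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(bad_event(k, rng) for _ in range(trials)) / trials

for k in (1, 5, 10):
    print(k, estimate_failure(k))   # roughly 2**-k: ~0.5, ~0.03, ~0.001
```

In the actual analysis the error is of course not a coin-flip count but a function of the adversary's hashing power and of k, and the quantification over all adversaries and environments cannot be sampled; this only illustrates what "negligible in k" buys you as k grows.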
So let me now come to the protocol, because so far we have been talking about a model within which we can express the protocol and its properties, and now it's time to describe the protocol itself. Obviously, even though the model is expressive enough, there is no way to take the Bitcoin implementation directly and describe it in this model; even though this is an ultimate goal that we surely hope to reach at some point, we are not there yet. Recall the example of secure channels that I showed you at the beginning of the presentation: from the Dolev-Yao modeling of secure channels to the security proofs about TLS, there were more than 30 years of active research that took us from an abstraction of the protocol to a security proof of an actual implementation. Hopefully it's not going to be 30 years for Bitcoin, but clearly we need to start from somewhere, and we have to start from an abstraction. This abstraction is what we call the Bitcoin backbone protocol, because it's a simplified algorithmic version of Bitcoin that abstracts away many aspects of the actual implementation while it maintains, if you want, its core, or at least so we hope. Another important feature of the way we describe the Bitcoin protocol, which also explains the name we chose for it, is that we make a conscious distinction between the data structure, which is the blockchain, and the application layer, which is the transactions. We wanted to remove the transactions from the equation; we felt that what is more relevant here is to try to understand the security properties of the data structure, and to remove the transaction layer, not as something that is not important, but as something that has to be studied separately, on top of the analysis that we can do focusing only on the data structure. Moreover, we felt, and it was a step in the correct direction, that if we did that and just studied the data structure, there could be other
problems that could be solved using the same data structure, and this was the motivation for doing this separation and for calling this the Bitcoin backbone protocol. As we will see, we will use this backbone protocol to solve other problems, not only consensus. So the protocol itself is the one you are familiar with, or at least an abstraction of what you are familiar with. What is important to keep in mind is these three functions V, I and R, which are going to abstract away all the application-layer aspects of the problem. Just to give you a feeling of exactly how we define it: we have two hash functions G and H, and in the execution the players will maintain a state in the form of a blockchain, which has the following structure. Every block contains three elements: s, x and ctr. Here s is a hash of the previous block. x is some input that is otherwise left unspecified — in the case of Bitcoin this input might be a set of transactions, but we're not interested in actually specifying what this input is, and in fact it might be different in other applications. ctr, on the other hand, does not have a specific function at the application layer; it is a value such that when we hash s and x together with G, and then hash this output of G together with ctr using H, we get a hash value which is less than T. Now T is a parameter of the protocol called the target, and this is what specifies the proof-of-work aspect of each block. So now you see that we can chain blocks like that together, and the result is a blockchain, anchored at the genesis block, whose contents are not of interest here. As we are going to be analyzing the protocol in the static setting, the contents of the blockchain are defined by these x values, which are present in every block and which, in order to satisfy validity, have to pass this predicate V.
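To make the block structure concrete, here is a minimal Python sketch of the two-level hashing just described. The choice of SHA-256 for both G and H, the byte encodings, and the target value are illustrative assumptions of mine, not part of the formal model.

```python
import hashlib

def G(s: bytes, x: bytes) -> bytes:
    # inner hash: binds the payload x to the previous block's hash s
    return hashlib.sha256(s + x).digest()

def H(ctr: int, g: bytes) -> int:
    # outer hash: the proof-of-work hash, interpreted as an integer
    d = hashlib.sha256(ctr.to_bytes(8, "big") + g).digest()
    return int.from_bytes(d, "big")

def valid_block(s: bytes, x: bytes, ctr: int, T: int) -> bool:
    # a block <s, x, ctr> is a proof of work iff H(ctr, G(s, x)) < T
    return H(ctr, G(s, x)) < T

def mine(s: bytes, x: bytes, T: int, max_tries: int = 1_000_000):
    # try successive counters starting from 0, as in the backbone protocol
    for ctr in range(max_tries):
        if valid_block(s, x, ctr, T):
            return ctr
    return None

genesis = b"\x00" * 32
T = 2 ** 248                 # very easy target so the demo finishes quickly
ctr = mine(genesis, b"payload", T)
assert ctr is not None and valid_block(genesis, b"payload", ctr, T)
```

Lowering T makes the condition H(ctr, G(s, x)) < T harder to satisfy, which is exactly how the target parameterizes the difficulty of each block.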
So this is the first specification. We're not interested in defining the way V works in this high-level description; we just want to say that the chain is valid as long as it has the structural property defined here and, at the same time, the validity predicate is satisfied. So what happens within every round? Players will obtain a special input from the environment, which you can think of as a symbol that instructs them to insert a certain value x. Given that input from the environment — there could be multiple such symbols — they will pass it to this capital-I function, which will use all the local information, including the blockchain and perhaps a private or public state that the player may have, to process the input x. So basically this value is going to be processed by I and prepared for inclusion in the blockchain. It could be the case that I itself is just the identity function; nevertheless, in all the applications we have, there are some strict requirements about I that include, as the necessary minimal requirement, that I introduces some entropy. So the minimum output here for I is going to be the input x plus a sufficiently long random nonce. Subsequently, the parties will make their queries to the hash function to obtain a new block, trying different ctr values starting from 0. Once they have found such a block, they will do a transmission, which I'll show you next. So let's say we have a player here which finds a new block that extends its chain; once this is found, the new chain is going to propagate to all players via the diffusion mechanism that underlies the protocol. Finally, in each round a player is going to be maintaining its blockchain: at the beginning of the round, every player is going to compare all incoming chains with its own, and if a longer chain is found, it will be adopted and the previous chain discarded. Observe here that "longer" works because I'm
analyzing the protocol in a static setting where the same target is used throughout the protocol execution. Finally — and that's the last function, R, which is left unspecified — a player, when given a read symbol from the environment, is going to process its blockchain according to R. Again, we do not care to specify how the player reads the contents of its blockchain. So here is the actual pseudocode, which describes what I just said, and even though I'm not going to go over it, I think the value of showing it is to say that this is a precise object: the theorems that we're going to be proving are with respect to this given pseudocode description that fully determines how the protocol operates. Here is the validation predicate that checks whether a chain is valid, and it does the things you would expect from such a predicate: it checks that all blocks are valid, going through a big repeat-until loop that checks the proof of work of the whole chain. Here is the proof-of-work pseudocode, which is given some input x to insert in the chain and then goes into a while loop that attempts to find the proof of work. And finally, this is the main loop of the protocol. What it does, in perpetuity, is pick the best chain from the ones it sees in the network together with its local one, determine the next input to be inserted, and try to find a new block to extend its local chain; if it does so, it transmits it to the network. We are almost done with the description of the protocol; there are only some requirements that have to be added. The first one is that the input function I should produce inputs that are acceptable according to V — that's a minimum requirement, because if I produced things that made no sense, the protocol would make no progress. The other is input entropy: the function I, on the same input, should not produce the same output. Given that we are in the random oracle model, it is very easy to achieve this property of input entropy by making a random nonce part of the output of I.
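The validation predicate, longest-chain rule and main-loop selection just described can be sketched in Python as follows. This is my own illustrative rendering of the pseudocode being discussed, with a trivial placeholder for V and hypothetical hashing conventions.

```python
import hashlib

def G(s, x):
    return hashlib.sha256(s + x).digest()

def H(ctr, g):
    return int.from_bytes(hashlib.sha256(ctr.to_bytes(8, "big") + g).digest(), "big")

def V(x):
    # application-level validity predicate; left abstract in the model,
    # here a trivial placeholder that accepts any payload
    return True

def validate(chain, T, genesis):
    # structural validity: each block's s equals the hash of the previous
    # block, the proof of work H(ctr, G(s, x)) < T holds, and V accepts x
    prev_hash = genesis
    for (s, x, ctr) in chain:
        if s != prev_hash or H(ctr, G(s, x)) >= T or not V(x):
            return False
        prev_hash = hashlib.sha256(s + x + ctr.to_bytes(8, "big")).digest()
    return True

def pick_best(local_chain, incoming_chains, T, genesis):
    # longest-chain rule: adopt the longest valid chain seen this round
    best = local_chain
    for c in incoming_chains:
        if validate(c, T, genesis) and len(c) > len(best):
            best = c
    return best
```

Note that comparing plain lengths is only sound here because the target T is fixed; with a variable target one would compare accumulated difficulty instead, exactly the point made above about the static setting.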
In practice there are many ways in which actual protocols include a random nonce, and in the random oracle model it's very easy to argue: there is only a very small probability that the parties choose the same random nonce twice, and also a small probability that the G function would output the same value on different inputs. Such events would essentially amount to a hash function collision, which is a small-probability event in the case of a random oracle, and something that we don't believe can be mounted against the hash functions we consider secure. So, going directly to prove persistence and liveness is a difficult task. We need intermediate targets; we need to understand the data structure a bit more in order to be able to prove our final objective formally. For this reason, in the GKL paper, as well as in follow-up work by myself and Giorgos Panagiotakos, we introduced properties that are used as intermediate goals, which help one lay out arguments towards the objective of proving the ledger protocol robust. These properties are a little bit closer to the data structure that is maintained by the parties, whereas persistence and liveness refer to the ledger in a fairly abstract way, talking about transactions that are chronologically ordered. The properties we now introduce talk about the blockchain in a more intuitive way, and they are the following three: common prefix, chain quality and chain growth. Common prefix informally says that if two players prune a sufficient number of blocks from their chains, they will obtain the same prefix. Chain quality states that any large enough chunk of an honest player's chain will contain some blocks from the honest players. Chain growth says that the chain of any honest player grows at least at a steady rate; we're going to call this rate the chain growth, or speed, coefficient. I'll come next and give you the exact definitions. So, for common prefix, you can think of this
as a convergence or agreement type of property. Honest parties, during the execution of the protocol, will disagree; their views are going to be divergent — and when I say views now, I'm referring specifically to the data structure, which is the only state that they maintain as they execute the protocol. So you can think of the view of the protocol as the combination of all these blockchains, and the question is how different those individual views are. The common prefix property says that if you take all these together and map them to a single, let's say combined overlay view, the honest parties are going to have a big common prefix and they're only going to disagree in the last few blocks. Here is the formal statement of this common prefix property; this is the predicate that formally defines common prefix in an execution. It says: for all rounds r1, r2 with r1 less than or equal to r2, and two honest parties P1 and P2 with chains C1 and C2, if we prune k blocks from chain C1, we will find ourselves in a prefix of chain C2. I'm using this notation here for pruning, and prefix here should be understood in its literal string sense. This version of common prefix which I'm stating here is a strong version, which we introduced in 2016; the original version was the same but did not require r1 to be less than or equal to r2 — it referred to the same round. This had a certain artifact in our presentation: we did not provide a black-box reduction of persistence to the common prefix property, which is something that might be desirable from the aesthetic point of view of the mathematical presentation. In a nice work, Pass, Seeman and Shelat highlighted this and proposed consistency to achieve the black-box reduction, while at the same time we introduced this strong common prefix property to achieve the same black-box reduction, which is the proof that I will be presenting to you today.
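As a concrete rendering of the pruning notation, here is a small Python check of the common prefix predicate over a pair of chains, represented simply as lists of block identifiers — an illustrative encoding of mine, not the paper's.

```python
def prune(chain, k):
    # C^{|k}: drop the last k blocks of the chain
    return chain[:-k] if k > 0 else chain

def is_prefix(a, b):
    # prefix in the literal string sense
    return len(a) <= len(b) and b[:len(a)] == a

def common_prefix_holds(c1, c2, k):
    # strong common prefix for one pair of honest chains (with r1 <= r2
    # implicit): pruning k blocks from C1 must land inside a prefix of C2
    return is_prefix(prune(c1, k), c2)

# two honest views that diverge after block "b"
c1 = ["g", "a", "b", "d1", "d2"]
c2 = ["g", "a", "b", "e1", "e2", "e3"]
assert common_prefix_holds(c1, c2, k=2)      # pruning 2 blocks leaves g,a,b
assert not common_prefix_holds(c1, c2, k=1)  # g,a,b,d1 is not a prefix of c2
```

In the full predicate this check is universally quantified over all pairs of rounds and honest parties; here it is shown for a single pair of chains.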
So the chain quality property talks about the chain of one of the honest parties. It asks how many blocks produced by the adversary are present in the blockchain of an honest party, and specifically it compares the blocks that the adversary inserts in the blockchain to the blocks produced by the honest parties. The chain quality property, which we introduced in the GKL paper, has a parameter mu and a parameter ell, and it says that in any sequence of ell consecutive blocks, as we say, the proportion of blocks produced by the adversary should be less than mu times ell. Here mu should be a value between 0 and 1, and we expect this value to somehow relate to the adversarial hashing power. Finally, chain growth observes an honest player's chain between two rounds, and it argues that the chain actually grows as the protocol execution advances. Specifically, it has another coefficient, tau, and it says: for any two rounds and an honest player P that has these two chains in the respective rounds, if the two rounds are sufficiently far apart time-wise, then the chain of the party has grown by a certain amount determined by tau, the speed coefficient. This property was introduced as a separate property in the 2015 paper with Panagiotakos; it was implicitly present in GKL, but only in the form of a lemma as opposed to a separate property. So here is our proof strategy — this is what I'm going to focus on for a good part of the remaining presentation. This is the proof that the Bitcoin backbone protocol is a robust transaction ledger. First we're going to define the notion of a typical execution, which is going to be extremely useful for the probabilistic arguments about the protocol. Then we are going to argue that typical executions happen with overwhelming probability, and then we're going to prove chain growth, common prefix and chain quality. Using these three properties, we will then derive persistence and liveness, and like this complete the proof. That's going to be the plan for the remaining presentation.
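The chain quality and chain growth predicates just stated can be rendered in Python along the same lines. Labeling blocks honest or adversarial and passing chain lengths explicitly is my own illustrative encoding.

```python
def chain_quality_holds(blocks, ell, mu):
    # blocks: list of 'h' (honest) / 'a' (adversarial) producers, in chain
    # order. In every window of ell consecutive blocks, the adversarial
    # fraction must stay below mu.
    for i in range(len(blocks) - ell + 1):
        window = blocks[i:i + ell]
        if window.count('a') >= mu * ell:
            return False
    return True

def chain_growth_holds(len_r1, len_r2, r1, r2, s, tau):
    # Between two rounds r1 <= r2 at least s rounds apart, an honest
    # party's chain must have grown by at least tau * s blocks.
    if r2 - r1 < s:
        return True  # predicate only constrains sufficiently distant rounds
    return len_r2 - len_r1 >= tau * s

assert chain_quality_holds(list("hhahh" * 4), ell=5, mu=0.5)
assert not chain_quality_holds(list("aahaa"), ell=5, mu=0.5)
assert chain_growth_holds(10, 16, r1=100, r2=120, s=20, tau=0.3)
```

As with common prefix, the real properties quantify over all honest parties and all windows of an execution; the functions above check one instance.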
Before the break, I hope I'll be able to finish the first step, which is to define what typical executions are, why they are important, and prove to you that they happen. If you remember, a lot of these properties are going to be argued in the probabilistic sense. So let's take S to be a set of consecutive rounds, and let's define three random variables which we're going to use heavily in our proof strategy. The first one is X(S), which is the number of successful rounds. A successful round is a round in which an honest party creates a block; if more than one honest party created a block, that round is also successful. Rounds that are not successful have the property that the honest parties are silent — they are silent because they have nothing to say; they have produced nothing. So that is X(S). Another random variable, Y(S), is the number of uniquely successful rounds. Those are very interesting as well, and you can think of them as a more strict notion of a successful round: a uniquely successful round is a round in which exactly one honest party produced a block. X and Y are random variables that have nothing to do with the adversary — they refer only to the honest parties. Z(S), on the other hand, is the total number of proofs of work that were computed during the sequence of rounds S by the adversary. So Z has to do with the adversary, and X and Y with the honest parties; in particular, X is always at least Y. Why these random variables are important will become clear in the course of this analysis. Let's go a little bit deeper and understand exactly what's going on, starting with f, one of the most important parameters: the probability that at least one honest party finds a proof of work in a round, given that the target is T. Let's say 2^kappa here is the size of the range of the hash function. The probability that none of the honest parties finds a proof of work can easily be seen to be like this; the
good thing here is that we are in the random oracle model, and you can think that every time an honest party attempts the proof of work, it is essentially trying to throw a ball into a basket: the probability of hitting the basket with one ball is T/2^kappa, and qn is the total number of attempts that the honest parties get collectively, since each of them makes q queries in a round. One minus the failure probability is the probability that some honest party finds a proof of work. Let's define p = T/2^kappa, just to avoid mentioning these two parameters all the time, and then we can do a simple calculation to show that f equals this expression, 1 - (1 - p)^{qn}. The good thing is that we are going to assume we operate in a setting where T/2^kappa is sufficiently small, and without loss of generality I am going to approximate f with this expression. What about the probability that exactly one party finds a proof of work? It can be bounded in the following way: here we have the probability, summed over parties, that one party finds a proof of work and all the other parties fail. This is only a lower bound, an estimation, and we can further bound it from below in the following way. What is interesting here is that the probability that exactly one party finds a proof of work is quite related to the values we had before, particularly f: it has this factor of roughly 1 - f. So basically this says that as f gets closer to 1, this probability diminishes, and it will turn out that this is one of the very important aspects of the analysis: choosing the value of f, and maintaining it at the appropriate point, is critical for both persistence and liveness. With a little bit of foresight: f should be neither too high nor too small. So let's now get back to X, Y and Z and see what their expectations are. The nice thing is that they can be expressed with the values already introduced.
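To see the effect numerically, here is a quick Python computation of f and of the factor f(1 - f) that lower-bounds the uniquely-successful probability, for some illustrative parameter values of my own choosing (not the lecture's).

```python
def round_success_prob(T, kappa, q, n):
    # f: probability that at least one of n honest parties, each making
    # q random-oracle queries against target T out of a 2^kappa range,
    # finds a proof of work in a round: f = 1 - (1 - p)^(q*n), p = T/2^kappa
    p = T / 2 ** kappa
    return 1 - (1 - p) ** (q * n)

# illustrative parameters: 1000 honest parties, 30 queries each per round
f = round_success_prob(T=2 ** 236, kappa=256, q=30, n=1000)      # small f
f_hi = round_success_prob(T=2 ** 244, kappa=256, q=30, n=1000)   # large f

# as f approaches 1, f * (1 - f) collapses: rounds still succeed, but
# uniquely successful rounds become rare
assert f < 0.05 and f_hi > 0.9
assert f_hi * (1 - f_hi) < f * (1 - f)
```

This is the quantitative version of the remark that f should be neither too high nor too small: a very high f destroys unique successes, which the security argument depends on.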
That's the expectation of X, that's the expectation of Y, and that's the expectation of Z over a sequence of rounds S. An important assumption that is going to recur in our analysis is that (n - t)/t is bigger than 1 + delta, and you can see that this is essentially an honest majority assumption. We cannot work with exactly 50% — none of our theorems would go through like that — so what we have to assume is that the honest parties have an edge over the malicious parties, and this edge is this delta, which could be a small real number very close to zero. Once we make this assumption, we can extract these lower bounds for X and Y. Why are these lower bounds extremely important? Because they relate the expectation of X and the expectation of Y to the expectation of Z. Frequently, in the security proofs that you will see, there is going to be a tension between X and Y on one side and Z on the other, and things will work in our favor if X and Y overcome Z. So it's good that, under this assumption, the expectations of X and Y become bigger than the expectation of Z multiplied by the factors that you see here. So let kappa denote the security parameter, and here is the definition of a typical execution: a typical execution is one where all these random variables X, Y and Z are well behaved with respect to their means. It is typical with parameter epsilon if, for any set of rounds S that is sufficiently long, the X, Y, Z values relate to their means as follows: X is bigger than (1 - epsilon) times its mean, Y is bigger than (1 - epsilon) times its mean, and Z is smaller than (1 + epsilon) times its mean. Notice that we have lower bounds for X and Y and an upper bound for Z, each scaled by this epsilon factor. That's a typical execution, and furthermore, another aspect of typicality is that no collisions or predictions take place against the hash function.
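The typicality condition can be checked mechanically. The following Python sketch samples per-round successes as independent coin flips — an idealization of the random-oracle queries, with all parameter values my own — and tests whether a sampled execution is typical for a given epsilon.

```python
import random

def simulate_rounds(num_rounds, f, f_unique, pz, rng):
    # idealized per-round draws: a round is successful with prob f,
    # uniquely successful with prob f_unique <= f, and the adversary
    # computes a proof of work with prob pz (independence is a
    # simplification for illustration)
    X = sum(rng.random() < f for _ in range(num_rounds))
    Y = sum(rng.random() < f_unique for _ in range(num_rounds))
    Z = sum(rng.random() < pz for _ in range(num_rounds))
    return X, Y, Z

def is_typical(X, Y, Z, EX, EY, EZ, eps):
    # typicality: X and Y not much below their means, Z not much above
    return X > (1 - eps) * EX and Y > (1 - eps) * EY and Z < (1 + eps) * EZ

rng = random.Random(7)
s, f, fu, pz = 20000, 0.05, 0.045, 0.02
X, Y, Z = simulate_rounds(s, f, fu, pz, rng)
# over a long enough window S, a sampled execution is typical except with
# probability exponentially small in |S| (a generous eps is used here)
assert is_typical(X, Y, Z, s * f, s * fu, s * pz, eps=0.5)
```

Shrinking the window length s makes atypical samples appear, which is exactly why the definition only quantifies over sufficiently long sets of rounds.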
A prediction is a special event for a hash function where you can predict the output of the hash function at a future round, even though the hash function is fed a highly random input. So this is the notion of a prediction, which is also a low-probability event, exactly like a collision. And I think I might be able to do this before we go on a break: typical executions happen almost always. The claim is that if you run an execution of the protocol — which is a random variable — then with overwhelming probability in the security parameter kappa, the random variables are going to be well behaved and no collisions or predictions are going to happen. I'm going to focus on the first of these statements, and I'll argue to you that it is not very difficult to prove. The good thing is that, based on the way we have laid out how the protocol works — the operation of the honest parties and the adversary and the way they interact — the random variables X, Y and Z are binomially distributed, and therefore what typicality asks are really concentration questions about three binomial random variables. The good thing is that we have a tail bound for the binomial distribution, the Chernoff bound, and this is the key ingredient of the argument on this slide. So suppose that you have a violation: a violation would be the existence of a sequence of consecutive rounds where either X or Y is too small — basically the honest parties are unlucky in one way or the other — or Z is too large, the adversary is too lucky. These are all binomial variables, and a violation means that they are far away from their means. The good thing about binomials is that we have a tail bound which is very strong, and you can see it here: if X is binomially distributed and mu is its mean, the Chernoff bound states the following regarding the deviations of the random variable from the mean, where delta here is a value between 0 and 1. Essentially, what the Chernoff bound tells you is that if you have a binomial, then
these so-called tails of the distribution can be bounded: their mass is exponentially small in this expression, which relates to the mean. The big power of this is that in many common settings, including the one we are describing here, a binomial naturally arises as a sum of n independent Bernoulli trials — a sequence of n independent coin flips is a common example. The mean is n times p, where p is the probability of a single success, and the good thing is that this n appears in the exponent, which means that as n grows, the smaller this exponential bound becomes. How is this relevant for us? The value |S| here is going to be Omega(kappa), and in this Chernoff bound the means of all these variables, as you see, are linear in |S| — linear in |S| for X, for Y, and for Z. Since |S| is Omega(kappa), when we apply the Chernoff bound we get this |S| in the exponent: the longer the sequence of rounds, the closer to their means you will find these three random variables. The nice thing about this notion of typicality — which is something that we developed as a proof strategy, not in the original version of the paper, but as we studied the problem further over the last few years — is that it packs the probabilistic argument into a single statement. The benefit is that I will not have to revisit probabilistic arguments in the remainder of the security proof: all my probability requirements are packed into this typicality argument, and from now on, conditioning on a typical execution, I am guaranteed that everything is sufficiently well behaved. From that point on, in the second part of the talk, the proof is not going to use probability anymore but just combinatorial arguments about how the protocol behaves. So I will stop here and have a break.
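As an empirical illustration of the lower-tail Chernoff bound Pr[X <= (1 - delta) * mu] <= exp(-delta^2 * mu / 2) for a binomial X, here is a short Python Monte Carlo check; the sample sizes and parameters are arbitrary choices of mine.

```python
import math
import random

def chernoff_lower_tail(n, p, delta):
    # Chernoff bound on the lower tail of Binomial(n, p), for 0 < delta < 1
    mu = n * p
    return math.exp(-delta * delta * mu / 2)

def empirical_lower_tail(n, p, delta, trials, rng):
    # fraction of sampled binomials falling at or below (1 - delta) * mu
    mu = n * p
    hits = 0
    for _ in range(trials):
        x = sum(rng.random() < p for _ in range(n))
        if x <= (1 - delta) * mu:
            hits += 1
    return hits / trials

rng = random.Random(1)
n, p, delta = 2000, 0.05, 0.3
bound = chernoff_lower_tail(n, p, delta)          # exp(-4.5), about 0.011
freq = empirical_lower_tail(n, p, delta, trials=2000, rng=rng)
assert freq <= bound  # the observed tail mass respects the bound
```

Because mu is linear in n, doubling n squares the bound's exponential factor — the same mechanism that makes typicality overwhelming once |S| is Omega(kappa).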
So, in the first part of this lecture we covered numbers one and two here: what a typical execution is, and that executions are typical with overwhelming probability. In this second part we are going to go through the properties — chain growth, common prefix, chain quality — and then we are going to derive persistence and liveness. This will complete the security proof that the Bitcoin backbone protocol is a robust transaction ledger, and after that I am going to discuss extensions of our model and also questions for future work. Common prefix is by far the most delicate property, so let's recall it: common prefix says that for any two rounds r1, r2 with r1 at most r2, and two parties P1, P2 that are active in these rounds with chains C1, C2, if we prune k blocks from chain C1, we are going to find ourselves in a prefix of C2. Observe that there is no requirement here that P1 and P2 be different, so we could be talking about the same party — honest parties' chains change over time, which is why the universal quantification also covers P1 equal to P2. The proof is going to be by contradiction. So let's imagine a situation where what we postulate doesn't hold. We have these two rounds r1 and r2, r1 being less than or equal to r2, and we have two chains C1 and C2, present in these rounds in the individual views of party P1 and party P2, and what happens is that chain C1, as you see here, has diverged from chain C2: the last common block has timestamp r0, and there are more than k blocks here, because we are in violation of the condition — if we prune k blocks from C1 we will not find ourselves in a prefix of C2. So there are more than k blocks in this diverging part; this is the violation. Now let's take the last honest block in the common part of C1 and C2, with timestamp r*. Observe that this may not be the last common block — the block that you see here could be an adversarial block. It could even be the genesis block; it could be the case that there is absolutely no honest block that these two
chains have in common apart from genesis. Let us assume that all our chains have the same genesis block, and therefore this block is well defined. Now let's define another round r, which lies between r1 and r2, as follows: it is the first round greater than or equal to r1 where an honest party has a chain C2-prime that diverges from C1 in the postulated way — that is, if you prune k blocks from C1, you're not going to be in a prefix of C2-prime. This round is well defined, because round r2, with party P2 holding chain C2, is an example that fits the definition. Of course, it may not be the first one — there might be other rounds preceding round r2 with the same property — but there is exactly one first such round, and it lies between r1 and r2; let's call it r. So now we are going to consider the sequence of rounds starting from r* up to r. This sequence of rounds has a special function for us in this security argument: we would like to argue that the period from r* to r is sufficiently long for us to apply elements of the arguments we used when describing typical executions. For this reason we have to ensure that a sufficiently long time has passed from r* to r. This will be ensured by stating that k is Omega(kappa), where kappa is the security parameter — remember, k is how many blocks you are pruning. This also implies, with a relatively easy argument, that some time has passed from r* to r, certainly, because a certain number of blocks have been produced in this part. Now let's have a look at what happens at round r - 1, the round immediately preceding round r. Round r is the special round where an honest party gets this chain C2-prime which diverges from C1. Think about it intuitively: what happened here? Before round r, it could have been the case that all honest parties were happily living on chain C1, and
there was no divergence — it could have been the case, not necessarily. So at round r - 1 things were, let's say, not divergent, and then something happens from round r - 1 to round r, and the view diverges for at least one of the honest parties — it could be all of them, but there is at least one. So at round r - 1, let's say party Pi has such a chain, and all the parties are consistent, perhaps extending chain C1, so C1 pruned by k blocks is a prefix of the chains that the parties have in this round. Nevertheless, at the end of round r - 1 there is a chain C2-prime that is transmitted, for which we know the following: it violates the prefix condition, so C1 pruned by k is no longer a prefix of C2-prime — that's by definition. Moreover, the chain C2-prime cannot be a short chain: it has to be at least as long as C1. To see that, observe that at round r at least one honest party will accept chain C2-prime, and we know that at round r1 there is an honest party that possesses chain C1. Since there is an honest party possessing C1 at round r1, and at round r at least one honest party — Pi, for example — has accepted chain C2-prime, it cannot be the case that a chain accepted by an honest party at this round is shorter than C1, because parties choose chains according to how long they are, and they do not adopt chains that are shorter. So this is a key point: this chain C2-prime, which is accepted by an honest party at round r, is at least as long as C1. Let's look at the situation we have now: we have this round r where this chain C2-prime that diverges from C1 has been introduced, and at round r - 1 we have chain C1 and perhaps other chains that fork from it near the end — these are all where the honest parties are — but then suddenly at round r one honest party moves from this set of chains to here, introducing a fork. By
the way, observe that C2-prime may or may not be equal to the chain C2, which is the one I postulated in the beginning, but this doesn't hurt the argument: the existence of C2-prime is derived from the fact that C2 exists, but it doesn't have to be the same chain. It could very well be that the chain C2 that P2 ended up with at round r2 is a different chain than C2-prime; this is not really of concern. What is of concern is that both C2 and C2-prime diverge from C1 by more than k blocks. So let's examine this sequence of rounds, which starts at round r* — or, let's say, at round r* + 1, the round after it — and goes up to round r - 1, and let us now try to measure how many blocks were produced in these rounds. Take a uniquely successful round among those. Here is the first key observation, a key lemma that can be derived from the way the protocol works: if a block is created in a uniquely successful round at a position m in the chain — by position m I mean it is the m-th block of that chain — then no other honest player will ever mine at position m. Let me convince you this is the case. In a uniquely successful round we have an interesting situation: an honest party with a chain of m - 1 blocks finds a block at position m, and just one honest party was successful — otherwise the round would not be uniquely successful — so all the other parties failed to extend their chains in that round. The moment the party finds a block in a round, it transmits it, so it becomes available to all the other parties in the next round. So what is going to happen? The parties that are behind — say with chains of length m - 1 or shorter — will see the chain with m blocks and will adopt it, whereas the parties that are already ahead (who knows, perhaps because of adversarial interference with their blockchains) will just
ignore it. But no matter what, in the next round nobody is going to be mining at position m; everybody is going to be moving ahead. So if a block is created in a uniquely successful round at position m in the blockchain, no other honest player will ever mine at that position. This creates an interesting challenge for the adversary, and it is the core argument of this security proof: every uniquely successful round in the sequence of rounds S creates a block that must be matched by a block of the adversary. Specifically, if there is a uniquely successful round that creates a block here, then for the adversary to maintain the fork, it has to produce a block to match the one produced by the honest party, because there is no way, at position m in these two chains, to have two blocks from honest parties. Uniquely successful rounds produce blocks that the adversary has to match if he is to have any hope of maintaining the fork. So this creates a condition, a challenge for the adversary: he has to produce blocks at least at the rate of the rounds that are uniquely successful. That is the first key observation of this proof. Now, how old can these adversarial blocks be? That's the next critical lemma, because it could be, let's say, that the adversary at the beginning of time makes a big batch of blocks, keeps them, and sprinkles them at the particular positions corresponding to uniquely successful rounds, whenever it is of interest to the adversary, and matches them that way. So the second key observation of this proof is that these adversarial blocks must have been created within the period S of rounds. Why is that?
Going back, remember that the definition of S was from r* to r; there was a reason I extended the interval back to reach the first honest block at time r*. What is the value of the first honest block in this sequence? It is that this honest block has entropy that was introduced to the protocol by an honest party. Therefore, under the random oracle assumption, and due to the typicality of the execution — we said no predictions can happen — any proofs of work produced by the adversary preceding round r* will not be useful, because the adversary cannot hope to be lucky and predict the entropy that was introduced by honest parties. So this is the second critical observation: the adversarial blocks that are going to be used to match the uniquely successful rounds must be produced within the period of rounds S, not before. These are the two critical lemmas that make this security argument work, and now that we have laid them out, we're ready to do the final step of the proof. Because the matching must take place, Z(S) has to be at least Y(S): remember, Z is the number of blocks produced by the adversary in the sequence of rounds S, and Y is the number of uniquely successful rounds — each one of them produces a block, and the adversary has to match them all within that sequence of rounds S. We selected S to span at least Omega(kappa) rounds, because at least k blocks are produced in the sequence of rounds S and k was selected to be Omega(kappa); therefore typicality applies. What typicality tells us is that we have a lower bound for Y(S) and an upper bound for Z(S), which we are now going to combine with this inequality. Essentially, what's happening is that this is going to be a violation — we don't expect this to happen in a typical execution. Recall that the expectation of Y is at least this, the expectation of Z is that, and we have this requirement regarding the number of honest parties and the number of malicious parties.
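A quick numeric sanity check of why the matching requirement Z(S) >= Y(S) fails in a typical execution: under the honest-majority assumption, the typicality bounds force (1 - eps) * E[Y] >= (1 + eps) * E[Z], hence Y > Z. The parameter values below are illustrative choices of mine, not the lecture's.

```python
def typicality_contradiction(EY, EZ, eps):
    # In a typical execution, Y > (1 - eps) * E[Y] and Z < (1 + eps) * E[Z].
    # If (1 - eps) * E[Y] >= (1 + eps) * E[Z], then Y > Z, so the matching
    # requirement Z >= Y demanded by a common-prefix violation cannot hold.
    return (1 - eps) * EY >= (1 + eps) * EZ

# illustrative numbers: uniquely successful rounds comfortably outpace
# adversarial proofs of work, thanks to the honest-majority edge delta
s_len, f, eps = 10_000, 0.03, 0.05
EY = s_len * f * (1 - f)       # lower bound on E[Y] via the f*(1-f) factor
EZ = s_len * f / 2             # an adversary with roughly half the honest power
assert typicality_contradiction(EY, EZ, eps)
```

Pushing EZ up toward EY — an adversary approaching majority — makes the inequality fail, which is exactly where the honest-majority assumption with its delta edge enters the proof.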
is the expectation of Z 2 times 1 plus epsilon this is here this is Z of S which is bigger than Y of S Y of S is bigger than 1 minus epsilon the expectation of Y of S which we can bound from here and there finally this produces this inequality which if you do the calculation will be an upper bound for N minus B over here which has terms that have to do only with two things the error of typicality which is the one also that is related to eternal bound argument and 1 minus F where F remember is the probability that one round is successful so N minus T over T should be less than that while we have postulated that N minus T over T is bigger than that therefore by choosing delta so in a way that 1 plus delta is bigger than that to achieve that delta should be this will be applied by choosing delta to be bigger than 2 epsilon plus B and this is a how much more should be the ratio of the honest parties versus the other side Observe that F is present here and this is a this suggests that we would like to choose F as small as possible if we are to choose delta as small as possible by this we won't delta to be close to 0 so that we are close to 50% and thus F should be close to 0 however that would only be for common prefixes as you would see like this would ask us to choose F not useful and therefore finding the right F is going to be an interplay between resistance and logarithms at the end so the proof here has been found to be very right to take delta as small as possible how does that take itself to be mining about it? 
It's actually orthogonal, if you will. Selfish mining will manifest itself in the next proof, because selfish mining does not concern the divergence of the chains but rather the quality of the chain's contents. The forking that happens in selfish mining is not done as an attempt to make the honest parties split, but as an attempt to make honest parties adopt adversarial blocks; this will become very clear when I come to the next proof. So this is the end of this proof, and it establishes common prefix. Next, chain quality. Consider a chain, and ℓ consecutive blocks from that chain. We are going to establish chain quality, and chain quality, as you remember, has a coefficient μ, which is an upper bound on the portion of blocks that the adversary can insert into a chain held by an honest party. We are going to prove that μ is 1/λ, where λ is a parameter that satisfies this condition; observe that under honest majority, λ can be some value bigger than 1, but as close to 1 as possible. Basically — and here you can also detect a weakness of this chain quality theorem — this says that the fraction of blocks controlled by the adversary approaches 100% as the adversary comes close to majority: the closer the adversary gets to majority, the more the guarantee degrades to essentially nothing, because they may be controlling basically all the blocks. Still, we are going to show that even this little chain quality is sufficient for many applications; this also answers the concern about this parameter in the previous question. So let's go by contradiction. Consider a sequence of blocks B_u, …, B_v of that length ℓ, which exists in a chain of blocks.
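Written out, the chain quality claim and the contradiction hypothesis just set up can be reconstructed roughly as follows (a hedged sketch from the narration; the exact constants are in the paper, and ℓ, λ, x are as used here):

```latex
% Chain quality, as narrated: among any \ell consecutive blocks of a chain
% adopted by an honest party, the adversarial blocks number at most \mu\ell:
\#\{\text{adversarial blocks}\} \;\le\; \mu\,\ell,
  \qquad \mu = \frac{1}{\lambda}, \quad \lambda > 1 .
% Proof by contradiction: letting x denote the number of honest blocks in
% the segment, assume the adversary exceeds its allotted portion,
x \;<\; (1-\mu)\,\ell ,
% and derive a violation of the typicality bounds.
```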
Let's augment that sequence of blocks to a slightly larger sequence B_{u′}, …, B_{v′}, so that B_{u′} was mined by an honest party at some round r1 and B_{v′} was mined by an honest party at round r2. That's a slight extension of the original block segment which is going to be convenient for us, because we want to bracket the segment by blocks that were actually produced by honest parties. Observe that such an extension is well defined: for example, B_{u′} can, if need be, be the genesis block, whereas B_{v′} can be the current head of the chain, because the segment belongs to a chain held by an honest party. Let x be the number of blocks in the segment produced by honest parties, and, for the sake of contradiction, suppose x < (1−μ)ℓ′; in other words, the adversary controls more than a portion μ of the segment. With a similar argument as before, we can argue that, because of the typicality of the execution, all blocks of that segment were produced in this sequence of rounds — this again invokes the fact that no predictions happen in a typical execution — and therefore the blocks we have in the segment were produced during the period of rounds between r1 and r2. The second relevant lemma is that, because of the way we chose S, the length ℓ′ of the expanded segment satisfies ℓ′ ≥ X(S). Why is that the case? Let me try to convince you. X(S) is the number of successful rounds, and this is the minimum rate, if you want, at which the honest parties' chain grows, no matter what the strategy of the adversary is: every successful round adds a block to the honest parties' chain, no matter what the adversary does. This does not guarantee that the block is going to be honest — it might be adversarial — but the point is that if you have a successful round, the honest parties are going to be rushing forward, without the adversary being able to hold them back. The only thing the adversary can possibly do is issue another block of its own creation, but the honest parties' chain will still grow, and that's the key point. So ℓ′ is at least X(S), because otherwise no honest party would have accepted B_{v′}, and we know B_{v′} is accepted by everybody. With this key observation and our assumption about x, we can bound Z(S): the number of blocks produced by the adversary must be at least ℓ′ − x, because that is the number of blocks of the segment under study minus the x blocks produced by honest parties; ℓ′ − x, by the assumption, is bigger than μℓ′ — because x is not that many — and by lemma number two this is at least μX(S). What is relevant for us is Z(S) > μX(S). Basically, Z(S) could be all the blocks the adversary produced, because by typicality all the blocks we are considering were created within that period of time; so the adversary is stretched, let's say, against that number, because he has to insert all those blocks there. Does that make sense? We place no restriction on where the adversary mines — he will be mining anyway, and if he's smart he will be mining in sensible places — this is a counting argument: it does not say where the adversary mined, but the sheer volume of blocks the adversary produced must be at least this much, which in turn must be at least that much. That's good enough, because now we can turn to an argument similar to the one used at the end of the common prefix proof, involving typicality and the closeness of these random variables to their means, and deriving the contradiction. So we have that Z(S) > μX(S), and the number of rounds in S is at least Ω(κ); because of typicality
we have these two conditions: X(S) > (1−ε)·E[X(S)] and Z(S) < (1+ε)·E[Z(S)]. We can again take these two and use them in conjunction with that inequality. At the same time, we recall the definitions of the means — the mean of X, the lower bound for the mean of Y, and the mean of Z — as well as the honest majority condition. Doing the substitution, starting from this side: (1+ε) times the mean of Z(S), which I have from here, is bigger than Z(S); that is bigger than μX(S); and that is bigger than μ(1−ε) times the expectation of X(S). Now, doing the substitution for the expectation of X(S) gives this expression, and using that inequality I can obtain an upper bound on (n−t)/t; you will see that, doing the calculation, I have set everything up so that I get a contradiction with 1+δ exceeding that bound, which can be implied by the appropriate choice of δ. Please don't be deceived into thinking this is now better than the common prefix case: I have also introduced, for my convenience, this multiplicative factor λ, which is a value bigger than 1, and it actually introduces more tension. So this completes the proof of chain quality with coefficient 1/λ. Now, you will observe that this is not very satisfying — in fact, it seems unfair: you might have hoped the adversarial portion would be closer to t/(n−t), or I should say closer to t/n, that is, proportional to how many adversarial parties we have in the system. Unfortunately, it is not. Then you can ask: is that an artifact of this proof? Is the proof tight, or can we improve it? Well, there is a strategy, the so-called block-withholding strategy, akin to the selfish mining strategy, which actually matches this bound, and therefore there is nothing we can do — at least within this model, where the adversary is rushing and controls the network — to improve this coefficient, and we have to live with, and see what we can get from, that rather small bound. Just to understand how small it is: imagine that as the adversary approaches 50% of the hashing power, the ratio of adversarial blocks becomes overwhelming in any sufficiently long segment of the honest parties' chain; the honest parties will barely manage to place a few blocks of their own. Now, one might object that selfish mining only pays off above a certain threshold; but here we are in a setting with a rushing adversary and adversarial network conditions — a stronger model — and therefore there is a better attack, actually simpler than selfish mining; we articulated that attack in the GKL paper, and you can see there the optimality of this chain quality bound. The only property left is chain growth. Let's consider the chain of an honest party; I am going to show you that the chain growth coefficient is (1−ε)f. Basically, this is how fast the chain will grow, and as you see, f here makes its second appearance — and now from the flip side. We wanted f very small for common prefix, to be close to honest majority; but here you will see that we would like f to be not so small, because if f is, say, very close to zero, then the chain growth coefficient is going to be basically zero and the chain will not grow. And as you will see, we want the chain to grow, because based on chain growth we are going to prove liveness. The proof is direct. Observe that in any successful round the chain of an honest party grows by a block — I already argued that in the chain quality proof, and it follows directly from the way the Bitcoin backbone is defined. So if you take a number of rounds, say s, the expectation of the number of successful rounds is f·s, and this is the minimum rate at which the honest parties' chain will grow. Observe
that it may grow faster — say, the adversary decides to play honestly and participate — but the worst thing they can do against chain growth is to do nothing at all; that is what minimizes the growth of the chain. Now, due to typicality again, X(S) has this relation to its mean, and therefore in a period of s rounds we obtain at least (1−ε)f·s blocks, which is a direct proof that the chain growth coefficient is (1−ε)f. I left this easiest proof for last because now, with these three properties of the data structure — common prefix, chain quality, and chain growth — we are ready to prove our objective: that the Bitcoin backbone implements a robust transaction ledger. The assumptions we are going to use: executions are typical with error ε; we have the value δ, at least 2ε+f, which controls the 1+δ honest majority coefficient; we have λ, close to 1; and the value f, which is roughly p·(n−t), is somewhere between 0 and 1 — not too small, not too big. That is the setting for the final theorem. First we prove persistence, by contradiction. Suppose persistence fails. What does that mean? There is a transaction that is reported as stable by an honest party at round r1 — that is what we place here: an honest party P1 reports transaction tx as stable at round r1 — and at another round r2, possibly bigger than r1, the same transaction is reported as stable by an honest party P2, maybe the same as P1, but in a different position. Given this, the chains C1 and C2 of the two parties satisfy the following: if I prune k blocks from C1, the transaction tx will be present — that happens at round r1 — whereas if I prune k blocks from C2, transaction tx is going to be present as well, but in a different position. This means that C1 pruned by k blocks cannot be a prefix of C2 pruned by k blocks, or vice versa. That is because transactions cannot be repeated inside a chain — I mentioned that transactions have to be unique, so you cannot accept the same transaction twice in the ledger — and that is what is needed to finally argue that there is a direct violation of the common prefix property. Therefore persistence is reduced to common prefix in this very simple slide. Liveness asks the following: if you try to insert a transaction into the ledger for a sufficient number of rounds u, then, after waiting these u rounds, the transaction is going to be adopted by everyone — every party will have it in their blockchain, sufficiently buried, under k blocks. Let's examine what happens at the round right after we have insisted for u rounds. Because of chain growth and the fact that we waited u rounds, there are going to be at least τ·u new blocks in each honest party's chain, where τ is the chain growth coefficient. Furthermore, by chain quality, a 1 − 1/λ fraction of them — where 1/λ is the chain quality coefficient — will be due to honest parties. Given the choice of u, you can do the substitution and lower-bound this value by one, which means that after u rounds there is going to be at least one block coming from an honest party in the chain of everyone. And that's enough: we were attempting to insert that transaction for u consecutive rounds, and at least one block in that window was produced by an honest party; that honest party had the transaction in its pool of transactions, put it into its block, and therefore the transaction made it into everyone's ledger. So this is the final proof that the Bitcoin backbone protocol implements a robust transaction ledger. You can observe the parameters here: in particular, the parameter u for liveness has f in the denominator, and therefore the smaller the f, the longer we have to wait for the network to include the transaction; at the same time, f is involved in the bound that δ should exceed, so we would like f to be small so that we are not too far from honest majority. This creates a natural tension between transaction processing speed and security, which can now, based on all this modeling work, be studied in a formal way. Alright — with this, let me come now to the applications. The first thing you can think about — and the first thing we tried to look at when we were working on the GKL paper — is to understand how this relates to consensus: how do you use this blockchain protocol to solve consensus? This was the first natural question, and it sounded like it should be easy, but it turns out it's not — it's actually not easy to solve consensus using this protocol — and that was a little surprising. As I was starting to understand this problem better, I noticed a lot of confusion about what consensus is versus what the blockchain is doing; there are all sorts of mix-ups in terminology when people talk about this problem. Nevertheless, keep in mind that consensus is a well-defined problem with a long history in the computer science literature, and even though it has many versions, many variants, this variant is the one that people identify with the term. So how can we solve it? The obvious idea: can we apply the backbone protocol — remember we had the validity predicate, the input function, and the read function — and define them in a certain way so that the blockchain can be used to solve consensus? So here is the reduction: instantiate the Bitcoin backbone by specifying only the validity predicate, the input function, and the read function, and then solve consensus using the properties of persistence and liveness. Hopefully that will do the trick.
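To make the shape of this reduction concrete, here is a minimal sketch of how the backbone is parameterized by the three functions. All names and types here are hypothetical illustrations, not the paper's pseudocode: chains are modeled as plain lists of block contents, with headers and proofs of work elided.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

# Hedged sketch: the Bitcoin backbone is parameterized by three functions --
# a validity predicate V, an input (content) function I, and a read
# function R.  Different choices of (V, I, R) yield different applications
# (a transaction ledger, a consensus protocol, ...).

Chain = List[Any]  # a chain, reduced to its sequence of block contents

@dataclass
class BackboneApp:
    V: Callable[[Chain], bool]   # which chains a party considers admissible
    I: Callable[[Any], Any]      # what content a party puts in its next block
    R: Callable[[Chain], Any]    # how a party interprets (reads) its chain

    def read(self, chain: Chain) -> Any:
        # parties only ever interpret chains passing the validity predicate
        if not self.V(chain):
            raise ValueError("invalid chain")
        return self.R(chain)

# Toy ledger instantiation: every chain is valid, a party inserts its input
# verbatim, and reading returns the sequence of block contents.
ledger = BackboneApp(V=lambda c: True, I=lambda x: x, R=lambda c: list(c))
print(ledger.read(["tx1", "tx2"]))  # -> ['tx1', 'tx2']
```

The point of the sketch is only the interface: persistence and liveness are properties the backbone gives to any such (V, I, R) instantiation, so solving consensus reduces to picking the three functions well.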
Nakamoto himself was aware of this, and he was aware that consensus is a different problem — or at least he didn't call it consensus; he referred to it as the Byzantine Generals problem. Actually it's a closely related variant: what people usually identify with consensus is the Byzantine Agreement problem. But he was aware of it, and he actually tried to argue that you can use the blockchain to solve the consensus, or Byzantine Generals, problem in a direct way. There is a post — not many people know about it — from 13 November 2008, in which Nakamoto describes the way to use the blockchain to derive consensus, because apparently it was also felt at the time that there must be a relationship between the two problems, and that it should be the case that the blockchain solves consensus itself. You can go to this link and read his exact description, which fits nicely into our framework. The protocol he described — which I will call the Nakamoto consensus protocol, and which I would like you, in your mind, to differentiate from the blockchain, because this is the protocol that solves consensus in the classical sense of what consensus means, not the blockchain — can be described easily with the machinery introduced in this work. I'm giving the full description here, but let me just describe it at a high level; it's quite simple. In order to use the backbone protocol we have to define V, I and R: what it means for a chain to be valid, how we insert inputs into the blockchain, and how we interpret the blockchain, which is R. In this Nakamoto consensus protocol, described in the backbone model: the predicate V accepts only blockchains which are very simple — completely empty of anything except a single input value in each block, either 0 or 1, because remember we are solving consensus in the binary case — and the inputs of the parties are 0 or 1. So basically the parties try to build blockchains that have their own input inside, and in every block they put their own input: if you have a 0, you try to make a blockchain which is 0, 0, 0, …; if you have a 1, a blockchain which is 1, 1, 1, …. That's all. How does the protocol conclude? You run this protocol, and once you reach a certain number of rounds, all parties terminate, look at their blockchain, and just output the single bit they see there. They will all agree, because of common prefix, and all valid blockchains have either all 0s or all 1s. So you can easily see that agreement — common prefix, if you want, as the two properties are very close to each other — applies. Unfortunately, this protocol will not give you validity with overwhelming probability; this is its deficiency, and the reason is very simple. Suppose all parties start with input 0, but the adversary is lucky enough to be the first to produce a block, and puts a 1 inside. Then all the honest parties, following the backbone protocol, switch to that chain and continue building on it — so, having started from 0, they switch to 1 — and this happens with non-negligible probability. Validity is therefore not achieved with overwhelming probability, and we call this a consensus protocol only in a qualified sense, because a consensus protocol should provide all the properties with overwhelming probability. That was a bit of a disappointment; we thought this should be easy. So we followed a different approach, which provides a consensus protocol that works for up to a one-third adversary. Let me describe this protocol — again very simple to describe by defining V, I and R, the three functions that determine what should be inside the blockchain, how we determine the input to the blockchain, and how we interpret it. This protocol, the first protocol from the GKL paper, works like this. It's very similar to the proposal of Nakamoto, but now you don't change the input you try to insert into the blockchain: you say, I started with an input, let's say 0, and every block I produce will have a 0. Maybe now the chain has both 0s and 1s — it's a mixture — but I keep insisting: whenever I mine a block, I put a 0 if my input is 0, or a 1 if my input is 1. This creates blockchains which have both 0s and 1s in their blocks. So how do we get agreement? The protocol works in the following way: we continue expanding the blockchain up to a certain point, and then we stop; now we would like to invoke common prefix, so we chop k blocks from the end of the chain, look at the remaining prefix — which has both 0s and 1s — and output the majority bit among them. Will this work? Well, it will work as long as the honest parties have the majority of the blocks. Unfortunately, we know this is not true in general — that's exactly the chain quality issue: chain quality does not guarantee that the honest parties produce the majority of the blocks inside the blockchain, and it could be the case that the honest parties started with 0 but end up with a chain whose big common prefix is full of 1s; running majority would then switch their input from 0 to 1. Such an attack is possible because of block withholding. Is this protocol useless, though? No — there is a way to make it work, and chain quality again gives us the answer. If you look at the bound we have, which is 1/λ, it says the following: what if λ is 2? Basically this means the honest parties are twice as many as the corrupt ones, so we are in a one-third-bounded situation. Then chain quality says that the majority of the blocks in any segment are going to come from honest parties, and therefore, running majority over the prefix at the end, you are going to get the correct bit. This came to us with a bit of mixed feelings: we had shown that you can use the blockchain protocol to get consensus, but it was one-third, and somehow this was diverging from the understanding that we should get honest majority — after all, we had shown that the protocol works for honest majority, at least with respect to the transaction ledger. And in fact we described fully how you can use the backbone to implement an abstract transaction ledger — not necessarily Bitcoin's, but any transaction ledger — and there, when you have transactions that are based on digital signatures, there is certain care you have to apply when you prove persistence and liveness. Specifically, for liveness, beyond the conditions mentioned at the beginning of this lecture, we also need digital signature security. The reason is rather simple: if digital signature security fails, the adversary can produce a forgery of one of your transactions, and this can hurt the liveness guarantee, and persistence. So with this we have a complete treatment at the abstract level: the backbone protocol implements a transaction ledger satisfying persistence and liveness for essentially any type of transaction you would want — at least those that are useful in the usual cases. Now, you may ask: can we go back to the slide — how do you prove agreement here? The worry is that one blockchain may be one block longer, and by chopping k you may not get the same prefix. The point is that you argue agreement as follows: first, you let the chain grow a lot — it has to extend to at least, say, 2k, where k is the common prefix parameter; then, by chopping k blocks, you guarantee that the honest parties have at least k blocks at the beginning, and by common prefix it is the same sequence. Oh, I see — so you are taking the majority over only the first k?
That's essential — otherwise you would run into exactly this problem. So first you chop k blocks, then you take the majority, and then by chain quality you guarantee that the majority is honest. So we were trying to find a way to argue consensus based on the blockchain, but it was clear that something else was needed: chain quality was not giving the honest parties sufficiently many blocks for all these majority arguments. So an idea came — one you might imagine as natural: why don't we use proof of work also for the transactions themselves, the inputs you want to agree on? After all, proof of work is what we use to reflect honest majority in the messages exchanged in such a system. So instead of using proof of work only to construct the blockchain itself, let's also use proof of work for the inputs. This suggests the following protocol, which looks similar to the previous one we designed but is enhanced: the transactions, which basically contain your input, are not just your input but a proof of work with respect to your input. Then we use the same idea, pretty much: once the blockchain is long enough, the parties prune the last k blocks and output the majority of the unique values drawn from the set of transactions in the ledger. Now, because of proof of work, we hope to argue that the honest parties will not be hindered by low chain quality: even a single honest block, as we have proven, is enough for all the honest transactions to be inserted — assuming, of course, we don't have bounds on how many transactions fit in one block; if we do, that is an issue, but you can still calibrate by letting the protocol run sufficiently long. We were very excited when this realization came, that we can use proof of work in two ways — but then we immediately realized there is a big pitfall. Now proofs of work are used for two different tasks: how do we ensure that honest majority is preserved in both? We cannot just naively say that the honest parties will spend half of their queries on one task and the other half on the other. Of course, the honest parties might do that — they are honest, after all — but the adversary need not, and then we would not be able to carry out the proof in the way I described. At the same time, this was quite interesting: it's one of those composition problems that frequently arise in cryptography, when you take protocols whose security properties you understand individually and try to bring them together. On one side we have the blockchain protocol, which we have completely understood in terms of security properties; on the other, a simple protocol creating proof-of-work inputs and transmitting them. These were two protocols that in isolation worked for honest majority: the blockchain protocol worked for honest majority — we got persistence and liveness, and that was enough for our proof — and the other protocol, also under honest majority, guaranteed that the majority of proof-of-work inputs would originate from the honest parties. So in isolation both protocols worked well, and if somehow we could bring them together with their properties preserved, we would have our consensus protocol, based on the blockchain, working with honest majority. Unfortunately, it was not at all obvious how to do this: we needed some way to compose these two proof-of-work protocols. This led to an idea in the GKL paper that we called 2-for-1 proofs of work — an idea for how you can get two proofs of work for the price of one. Let me try to explain this concept, which essentially enables the parallel composition of two proof-of-work based protocols while preserving their security properties. Imagine you have two proof-of-work style protocols, Bitcoin-style: each has some hash value w, produced by a hash function — say over a previous hash and an input value x — and the proof of work is valid when this hash is less than the target. That's the yellow protocol; the green protocol does the same, and you would like honest parties to run these protocols in parallel — two threads running at the same time. I'm presenting this generally, but you can easily apply it to the previous case: say the green protocol is the one that produces proofs of work for the input value that you have. So when you get some s, x, ctr, you do this verification and accept if that's the case; similarly, you do the other protocol's verification. If you just take the two threads and run them together, it won't work — the security argument cannot go through; it is not secure. So here is our idea, 2-for-1 proof of work. It's a composability idea — let's say, limited composability, for two protocols. What you do is the following; you can think of the result as the yellow-green protocol, because this is a non-black-box composition — you have to meld the two protocols. The protocol works like this: you take the hash inputs as before from the yellow and the green protocol, and instead of this value and that value you compute a single value w, which is the hash of both the yellow and the green inputs. And now you do the following: if this value w is less than T, you call it a proof-of-work solution for the yellow protocol; and if, when you reverse the string, you find that the reversed value is less than T′, you call it a proof of work for the green protocol. You then have to do verification in some sort of combined way, because now every proof of work for the yellow protocol will carry some green-protocol artifacts, and the proof of work for
the green protocol will have some yellow protocol artifacts that will also this does not affect verification the key point is that by doing this trick you attend to disassociate the proof of work operation and essentially try to get two rules of work for the price of work observe that the number of queries that the honest party performs will still be the same if you do not divide Q over 2 queries for yellow and Q over 2 for green you do Q green yellow queries and you will count them as solutions to yellow or the green protocol depending on whether the hash value is going to be too small or too big what we need to show for the composition to work is that these events are independent of each other because if they are not independent then the composition will work luckily we are doing an analysis in the framework also therefore as long as the T and T prime is sufficiently properly selected we can do we can get this level so finding the power solution for either side the power protocol is going to be the independent event and that should be a suitable choice for the T and T prime for example like power solution so by no means this is the only way you can do this and it's kind of an interesting question whether you can compose more protocols together how far you can go doing this limited composability step so here is the TKL process protocol our final version so parties will mine for some work as in the bitcode back mode they will then mine for some work for its input as well so they create a block chain and then they insert their input repetitively solving fresh proofs of work so every time they get a forward for the input they broadcast it and at the same time they do the block chain protocol using these two for one after the block chain grows sufficiently they chop the last T blocks and now they return the majority among the unique inputs in their common projects now based on the fact that now all the inputs come for the work it is guaranteed that there's going to be 
Now, based on the fact that all the inputs come with proofs of work, it is guaranteed that there is an honest majority among them, and using only this idea we can prove that the GKL protocol with the 2-for-1 proof-of-work trick achieves consensus: our reduction from the common prefix and chain quality properties shows that even the chain quality bound we obtained is sufficient for validity. I should say this had an interesting implication for fairness, because what happens is that each party's set of inputs is represented proportionally in the blockchain, and this is interesting because it is not the case for the Bitcoin protocol itself, due to block withholding and selfish-mining attacks. This approach was later followed by Pass and Shi, who observed that if you take this protocol and, instead of doing consensus, go back to the ledger problem, you can actually get a blockchain protocol where the number of blocks produced by honest parties is proportional to their hashing power. Here the notion of a block has to be redefined: it is the proof-of-work-based input from the consensus protocol I presented, which was given the moniker "fruit" in their FruitChains paper. They argued that, in the same way that we obtain consensus here against any adversary below 50%, they can get a blockchain where the number of blocks originating from honest parties is proportional to their hashing power, and this has positive implications for the incentive compatibility of a blockchain. It is also a good example of how work done in the standard cryptographic model can have very useful implications for arguing about rationality. So this brings me essentially to the end of this presentation with respect to the GKL paper, which established the fact that the Bitcoin backbone protocol implements a robust transaction ledger, and I am going to spend the last 15 minutes that I believe I have telling you about next steps. Most importantly, the way we defined the backbone protocol in the
GKL paper was restrictive in many ways, the main one being that the number of parties is static: it is always the same. Clearly this is not the case in the Bitcoin protocol; the Bitcoin backbone was quite limited in this sense, referring to a protocol that makes sense only in a setting where the number of parties is fixed at the beginning of the execution. So when we finished that work we felt that one of the most critical open questions was to understand the dynamic nature of the protocol, and it proved to be much harder than we expected. It took us a long time, about two years, to get to a new paper, just a few months before this date, in which we extend the backbone protocol to the dynamic setting, where the number of parties evolves with the execution, and the protocol is modified so that it tries to adjust itself to the fact that the parties change. The plan remains the same, though: we would like to prove the basic properties of the protocol and then show that it still implements a robust transaction ledger, in the sense of persistence and liveness, despite the fact that the environment can now fluctuate the number of parties. I am going to tell you a little bit about this result, which is an order of magnitude harder than the analysis of the backbone paper I presented, and given that I have very limited time I am only going to hint at the techniques we use. Nevertheless the paper is available on ePrint, and now that you have all the material from the first GKL paper you will be able to read it and follow what is going on, using also some of the points I will present. So what happens in a dynamic execution? The environment creates and removes parties on the fly. While this is something we can easily incorporate in our model, it is much more challenging for the analysis, because in the general model of the backbone paper we always have to
control the number of adversarial parties and the number of honest parties. So we have to introduce some terminology: we always have to count how many parties are ready and mining in each round, and make assumptions about how many of them are controlled by the adversary. The difficulty now is that the adversary, possibly in conjunction with the environment, is introducing and removing parties, and of course we cannot hope to prove security in a situation where the number of honest mining parties is somehow surpassed by the adversarial ones. For this we introduce the notion of a (γ, s)-respecting environment, which basically says that in any sequence of rounds of length at most s, the maximum number of parties does not exceed a factor γ times the minimum number of parties in that sequence. This is a restriction on how much the number of parties can fluctuate over any window of s rounds. Observe that γ is still a multiplicative factor here, and therefore it can accommodate exponential growth: every s rounds the population can grow by another multiplicative factor of γ, so a (γ, s)-respecting environment allows an exponential increase in the number of parties over longer intervals of time, although of course we still restrict the total execution to be polynomial. The protocol now has to change as well. Our rendering of the backbone protocol in this setting is the backbone protocol with chains of variable difficulty, because now we have to take into account, again from the Bitcoin implementation, that blocks will have different difficulties, and that blocks carry timestamps affecting the way the difficulty is calculated. Parties now adopt the chain with the highest total difficulty, not the longest chain.
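To make the (γ, s)-respecting condition concrete, here is a small check over a sequence of per-round party counts; the function name and the example numbers are my own illustration, not from the paper:

```python
def respects(n_parties, gamma, s):
    """Check the (gamma, s)-respecting condition on per-round party counts:
    in every window of at most s consecutive rounds, the maximum count must
    not exceed gamma times the minimum count. Checking every length-s window
    suffices, since any shorter window is contained in one of them."""
    for start in range(len(n_parties)):
        window = n_parties[start:start + s]
        if max(window) > gamma * min(window):
            return False
    return True

# Doubling every 5 rounds is fine for gamma = 2 and s = 5: exponential growth
# overall, but bounded fluctuation inside each window.
counts = [10] * 5 + [20] * 5 + [40] * 5
```

With a longer window (say s = 11) the same sequence is rejected, because a single window then spans both the 10-party and the 40-party stretch, a fluctuation of 4 > γ.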
Without loss of generality, and even though the actual Bitcoin implementation does something slightly different, we can define the difficulty of a chain as simply the summation of 1/T over all the blocks in the chain; basically, the smaller the target, the more difficult the block. Then we can also define a value f like the one we defined before: the probability that at least one of the n parties finds a proof of work with the target in effect during a round. Remember from yesterday that f should be neither too small nor too large, and that still has to hold now. What is very interesting now, though, is that T is not fixed: it depends on the execution, and the parties themselves may not even agree on the right value of T; they may be using different values of T even in the same round. In the end, the way we argue is that the protocol, in the way we are going to describe it, keeps redefining the value of T as the execution advances so that f remains a comfortable value between 0 and 1, not too small and not too big. Basically, you can think of the backbone protocol with chains of variable difficulty as a protocol that tries to recalibrate T so that this critical value f, which was so important for the properties of persistence and liveness as you saw in the analysis of the static paper, remains within a reasonable range. f is quite important, and you can see this in a relatively simple way. If f becomes too small, parties make no progress, because chain growth becomes too slow. If, on the other hand, f becomes too large, parties keep colliding, producing blocks at the same time, and you can easily see that an adversary exploiting its control of the network can divide the honest parties into two sets and keep them there by adversarially scheduling the messages that are produced. So the protocol can be completely broken if f is too small or if f is too big.
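The heaviest-chain rule just mentioned can be sketched as follows; only the difficulty formula (the sum of 1/T over the chain's blocks) comes from the talk, the encoding of a chain as a list of targets is my simplification:

```python
from fractions import Fraction

def chain_difficulty(targets):
    """Total difficulty of a chain: the sum of 1/T over its blocks' targets.
    The smaller the target, the harder (heavier) the block."""
    return sum(Fraction(1, t) for t in targets)

def select_chain(chains):
    """Variable-difficulty rule: adopt the chain with the highest total
    difficulty, not the longest one."""
    return max(chains, key=chain_difficulty)

# A short chain of hard blocks can beat a longer chain of easy blocks:
easy = [1000, 1000, 1000, 1000]   # 4 blocks, difficulty 4/1000
hard = [200, 200]                 # 2 blocks, difficulty 1/100
best = select_chain([easy, hard])
```

Using exact fractions rather than floats avoids rounding when comparing chains whose totals are very close.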
This motivates the target recalculation mechanism: what the protocol should do is recalculate the target to keep the value f close to an initial value f0, which, if you like, is the value coming from the original GKL analysis. Basically, you can think of the GKL paper as analyzing the security of the first epoch of the protocol, and then the protocol recalculates the target for the next epoch, running another instantiation of itself with the newly calculated target. So here is the target calculation function, extracted from the Bitcoin implementation and presented with sufficient precision in a single slide. The important aspects are the following: T0 is the initial target, the one we start with; m is the epoch length in number of blocks, a new parameter introduced in the variable-difficulty protocol which determines how many blocks form one epoch and are used to define the target of the next; and finally T is the target currently in effect. So how do we compute targets? The protocol operates in the following way: at the end of every epoch it looks at how long the last epoch took to complete. It does this using its local blockchain as a vantage point: it looks at the block timestamps, and from them it determines that the last epoch lasted for a certain period of time. It then compares this to how long the epoch would have lasted if the number of parties had remained the same, and from this it calculates the effective number of parties: not the actual number of parties, but the average number of parties that, had they been present, would have produced a sequence of m blocks in the time the epoch actually took. Using that, if it sees that the blockchain grows too fast it makes the target smaller, and if the blockchain grows too slow it makes the target bigger, so that blocks are easier to find.
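A sketch of that epoch calculation, with made-up names; the real implementation works over compact target encodings, but the proportionality is the point:

```python
M = 2016              # epoch length in blocks (illustrative; matches Bitcoin)
BLOCK_INTERVAL = 600  # intended seconds per block

def effective_parties(n_baseline, epoch_duration):
    """Average number of parties that, had they been present throughout,
    would have produced the M blocks of the epoch in epoch_duration seconds,
    given that n_baseline parties would take M * BLOCK_INTERVAL seconds."""
    expected = M * BLOCK_INTERVAL
    return n_baseline * expected / epoch_duration

def next_target_raw(target, epoch_duration):
    """Undampened linear correction: a fast epoch (more effective parties)
    shrinks the target, a slow epoch grows it."""
    expected = M * BLOCK_INTERVAL
    return target * epoch_duration // expected
```

For example, an epoch finishing in half the intended time implies twice the effective parties, so the target is halved to bring f back toward its intended value.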
All of this is done in a linear fashion, but quite interestingly there is a dampening rule that says: I am not going to change the target by too much. If the correction reaches the ceiling determined by the dampening threshold parameter τ (apologies for using τ again here), the protocol simply stops it there, and that is it. At first sight it is not very clear why Bitcoin does that, because you might say: if I want the maximum recalibration potential, I should allow the next target to follow the effective number of parties closely, without enforcing any ceilings or floors. By the way, in the actual Bitcoin parameterization τ is 4 and m is 2016, so epochs last 2016 blocks, which is about two weeks in real time, and the target can never change by more than a factor of 4 per epoch. Well, there was some real wisdom in choosing the target calculation like that in Nakamoto's Bitcoin implementation, and it is a wisdom that is nowhere present in the white paper; the wisdom is there only in the implementation. The wisdom was in introducing this threshold, and its importance became apparent when Bahack, in a 2013 paper, demonstrated that if you do not have this threshold controlling the target, and you allow the target to be recalibrated arbitrarily as the number of parties appears to change, there is a way to break the protocol, which in our terminology would basically be a violation of common prefix. It is a very nice scenario: miners privately mine a chain with timestamps in rapid succession, simulating network conditions in which there are very many parties, something that, through the target calculation, makes the target very hard. You might think that is a rather stupid thing for the adversary to do. Why would the adversary try to make its own proof of work harder? After all, this does not change the expectation of the adversary's success; it just makes it harder for the adversary to produce blocks.
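With the dampening threshold in place, the recalculation can be sketched like this (a simplified model, with my own names; τ = 4 and m = 2016 are Bitcoin's values, as mentioned):

```python
TAU = 4               # dampening threshold (Bitcoin's value)
M = 2016              # epoch length in blocks (Bitcoin's value)
BLOCK_INTERVAL = 600  # intended seconds per block

def next_target(target, epoch_duration):
    """Dampened recalculation: the observed epoch duration is clamped before
    the linear correction is applied, so the target never changes by more
    than a factor of TAU per epoch. This ceiling and floor are what defeat
    the raising-difficulty attack described below."""
    expected = M * BLOCK_INTERVAL
    clamped = min(max(epoch_duration, expected // TAU), expected * TAU)
    return target * clamped // expected
```

However extreme the timestamps an adversary writes into its private chain, the resulting target can move by at most a factor of 4 per epoch in this model.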
The nice observation, though, is that this increases the variance, and that is the key point: the honest parties advance steadily, while the adversary advances in large jumps, and by an anti-concentration argument you cannot rule out the probability that the adversary surges ahead with a sudden burst and presents its heavier chain first. It is a very nice attack, and it is negated precisely by putting an upper bound, a ceiling, and a lower bound on the target recalculation function. Indeed, Bahack did not have any argument showing that the attack could withstand such bounds; what he observed was that in the full Bitcoin implementation, which has the lower bound and the upper bound, his attack would not work, and that is why he called it a theoretical attack. I think time is running out; I have about five minutes, so I am not going to go through everything, I will just rush through the rest. The key point is that the proof strategy is going to be the same, but with a typicality notion that is much more involved. Based on this typicality notion we define a notion of goodness for executions, which determines how close to reality the targets the parties are using are, where "close to reality" is measured against a notion of an ideal target, the one we would have liked them to use. We then prove that this goodness enforces sufficiently accurate timestamps. Observe that timestamps in this setting cannot be trusted, because adversarial blocks may contain timestamps that diverge arbitrarily from the actual time. Using this argument we eventually show that, because the timestamps are roughly correct, the way the protocol recalibrates itself is also roughly correct, and eventually the proof is derived. I am going to skip the details; I will just hint a little at the difficulty we have even in defining what a
typical execution is. The main problem is that the random variables we are dealing with, say the variable that plays the role of X in the previous analysis, are much harder to handle, exactly because the setting is now dynamic: a variable's behavior depends on the execution itself, something that was not true in the previous analysis, where we could think of every attempt to find a block as a Bernoulli trial. It is still a trial now, but its success probability depends on the execution itself and on the outcomes of previous trials. To some of you this may suggest that a martingale analysis is what is needed here, and that is exactly what we performed. One of the main theorems, showing that all but a negligible fraction of executions are typical, uses martingale tail bounds to establish a theorem similar to the one we argued in the static case, where typical executions occur with overwhelming probability. Obviously we have no time to go into the details, so I will skip them and invite you to find the paper on ePrint and read everything there.
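For reference, the standard martingale tail bound behind a theorem of this kind is Azuma's inequality: for a martingale $X_0, X_1, \ldots, X_n$ with bounded differences $|X_i - X_{i-1}| \le c_i$,

```latex
\Pr\bigl[\, X_n - X_0 \ge t \,\bigr]
\;\le\;
\exp\!\left( \frac{-t^2}{2 \sum_{i=1}^{n} c_i^2} \right)
```

Unlike a plain Chernoff bound, this does not require the increments to be independent, which is exactly what accommodates trials whose success probability depends on the history of the execution.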
So, are we done with all this? We analyzed the Bitcoin backbone in the static setting, and I also showed you a sketch of the dynamic analysis, but the answer is obviously no: a lot remains to be done in the direction of understanding the robustness and security properties of blockchain protocols. I will just mention some of it; you have heard about some of these topics in the other talks already, and you will hear more. There is rationality and incentive compatibility, a very important topic. There is understanding the network setting better, synchronous versus asynchronous: everything I talked about is in the synchronous setting, and it is very important to do it in the asynchronous setting as well; there is ongoing work right now by many researchers in the area, and I am giving a definitely non-exhaustive list of references as a tribute to all the people working very hard to understand the security of these protocols. We should also investigate alternative protocols: now that we have a precise objective, and a proof that the Bitcoin backbone abstraction suitably meets that objective, it is quite timely to ask whether the Bitcoin protocol is the best way to meet it, or whether we can design something better and more efficient. Are there alternatives? Proof of work is one way to achieve this type of blockchain consensus and robust transaction ledger in the non-authenticated domain where the Bitcoin protocol applies, but there might be other ways, ways that strike an interesting balance between the classical, let's say centralized, consensus approach, where you have a fixed set of servers running the protocol, and the completely decentralized setting, where you have absolutely no public keys or any other infrastructure to track identities. I would argue there is a very interesting space in between the classical fixed-set-of-servers consensus setting and the Bitcoin blockchain setting, and it is very interesting to understand this intermediate space.
There are protocols that live in this space, and proof-of-stake protocols are a very interesting instance; there is very active research in this direction, including by ourselves, and we do hope to understand this much better soon. Finally, you can think of all these protocols as the underlying infrastructure for consensus at the blockchain level; there are many interesting things we would like to build on top of it, applications in the multi-party computation setting among them, and this is another very interesting setting where a lot of work remains to be done, some of which you have already discussed. So thank you very much for your attention. Thank you.