Next up we have a presentation titled "Permissionless Consensus to Infinity and Beyond" by Patrick Lamby and Manette from the University of Paris. So I'll let you take it from here. — To share today, since it's already quite late here, I propose going with something slightly lighter in content and a bit beyond the topics we've seen so far. During my PhD I worked a lot at the intersection of distributed consensus, as it is traditionally studied in computer science and distributed computing, and applied mathematics, where they also have different kinds of consensus systems, and I tried to make those two models talk to each other. That gave me a lot of ideas about, among other things, what blockchain could learn from these systems, and my talk today is about that. It's also about one of my pet peeves: according to Satoshi Nakamoto, Bitcoin solves the Byzantine generals problem, that is, classical consensus or atomic broadcast, and I'm going to try to argue that that's actually not what it does. I find it very annoying when people talk about it this way on the internet. If you need to take one thing out of this talk, it would be that Bitcoin does not solve the atomic broadcast problem, but probably something more like the asymptotic problem that I'm going to present next. One starting point of the theory I'm going to present is DeGroot's 1974 paper called "Reaching a Consensus". What he considers is a group of experts trying to reach agreement on the correct opinion about something. To do this, they exchange, then reflect, then exchange again, and so on. He models this with a very simple mathematical model: the experts have initial opinions, and then, in rounds, they weigh each other's opinions according to how much they trust one another; that forms a new opinion, and you just iterate over that process.
So there's an equivalent matrix form with a fixed matrix A and a vector x, where x(t) is just the matrix A applied to x(t−1). A is a stochastic matrix, which means that the updates are convex combinations. And DeGroot gives a very simple theorem — actually a much older theorem in mathematics — that if some power A^N of the matrix has all positive entries, then the opinions asymptotically converge to the same value. That leads me to define the asymptotic consensus problem, and I'm going to give you the presentation the way we do in traditional computer science. There's the convergence property, with every opinion converging to one final value; then there's the vanishing disagreement property, where the distance between any two opinions vanishes over time; and then there's the validity property, where all opinions always stay between the minimum and the maximum input opinions. That's the starting point, but there have been many other applications using the same kind of model. There's opinion dynamics, as in the DeGroot model, but the model is also used very much in everything modeling how to coordinate the clocks of robots, and in collective motion — flocking, rendezvous, and coverage protocols are all specific problems this refers to — and then a lot of synchronization problems for clocks. It's also used to model natural systems like cells — for instance cardiac cells — or flocks of birds orienting in the sky; that's more of a motion coordination problem, but it's a natural one. Overall, anything that pertains to the synchronization of control systems falls into that category. And then there's distributed data aggregation, especially in sensor networks, where it's a model that's used very much, and distributed load balancing, distributed signal processing, and data science applications in a distributed context. And all of these applications share a similar set of properties — well, a big subset each time.
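The DeGroot iteration x(t) = A x(t−1) described above can be sketched numerically. The trust matrix below is a made-up example, not from the talk; its rows sum to 1, and since it has all positive entries the opinions converge:

```python
import numpy as np

# Hypothetical 3-expert trust matrix; each row sums to 1 (row-stochastic),
# so every update is a convex combination of the current opinions.
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

x = np.array([0.0, 5.0, 10.0])  # initial opinions

for t in range(100):
    x = A @ x  # x(t) = A x(t-1)

# A has all positive entries, so the opinions converge to a common value,
# and (validity) stay between the min and max of the inputs.
print(x)
```

After 100 rounds the three entries are numerically identical, illustrating both convergence and the validity property in one run.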
So they are concerned with low-powered agents, especially when we look at cells or autonomous nodes in remote networks. One aspect of being low-powered is that agents typically do not have personal identities and do not possess global information like the number of nodes or the degree of the network, things like that. So in a way it's another sort of permissionlessness: not because new agents can enter the system, but because they don't have identities. It's also okay to be approximate: not everyone needs to have the exact same behavior, so long as it remains within a range of error that's acceptable. On the other hand, things should go fast — when you're trying to synchronize clocks, you want them running soon so that your application can start. And a consequence of that is that you should be making regular progress; it's more important to be making regular progress towards a goal than to wait and be absolutely sure that you're correct. To discuss extensions to the DeGroot model — there's been a lot of work in this area, but I'm going to focus on dynamic graphs, which is the fact that you have a partial communication graph (whereas in the DeGroot model all experts communicate with one another) and this partial communication graph is subject to change over time. And you also have dynamic weights, maybe because you've lost communication, and also because you may want to adapt the weights over time. So the general shape of the update is this one: agent i takes as its new value the weighted average of the old values of its neighbors in round t. I'm speaking in rounds, but these models are actually fairly tolerant of asynchrony; it's just much more comfortable to express them in a synchronous context.
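As a rough illustration of this dynamic-graph update, here is a minimal sketch in which each agent's neighborhood is re-drawn at random every round and the weights are equal over that neighborhood. The topology (each agent hears from two random peers) and the equal weights are assumptions chosen for illustration, not part of the talk:

```python
import random

# One instance of x_i(t) = sum_j a_ij(t) x_j(t-1): the neighbor set of
# each agent changes every round, and the weights a_ij(t) are equal over
# {itself} plus the agents it heard from this round.
def step(values, n_links=2):
    n = len(values)
    new = []
    for i in range(n):
        neighbors = random.sample([j for j in range(n) if j != i], n_links)
        group = [values[i]] + [values[j] for j in neighbors]
        new.append(sum(group) / len(group))
    return new

random.seed(0)
x = [float(i) for i in range(8)]  # 8 agents with spread-out opinions
for t in range(200):
    x = step(x)
print(max(x) - min(x))  # the disagreement is tiny after 200 rounds
```

Note that no agent uses identities or global information here: each one only averages whatever values it happened to receive, which is the "permissionless" flavor described above.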
We have a few theorems — there are many, many theorems about these systems, but a few are very important. One is that if you have a system with bidirectional interactions, then you get asymptotic consensus whenever the network never goes into a permanent split. So if, regardless of how long it takes, every agent ends up hearing from every other one at all times, then eventually you reach some sort of consensus — modulo some small technicalities that I don't want to get into. Another one is that for non-bidirectional networks, if the network has a bounded radius — meaning some node is able to broadcast to everyone in bounded time at all times, and that node may change — then the disagreement vanishes geometrically. To picture what this means, maybe go back to high-school physics. If you've got a circuit with a resistor and a capacitor, it's governed by a first-order linear differential equation, so the solution is an exponential and you see this nice little curvy shape: an exponential decrease of the voltage across the resistor. Maybe some of you already see where I'm going with this. Just to picture things a little more clearly, here is a figure from a recent paper of mine. On the right side you actually see a pathological example where the characteristic time before convergence is very, very long — it's exponential in the number of nodes — whereas on the left side it's much faster. But on both sides you see this smooth exponential contraction of the disagreement. Now compare this with classical consensus. We all know it has three main properties. One is termination: all nodes should decide. Another is agreement: if nodes have decided, then the decisions should be the same. And then the validity condition is that the decision should belong to the inputs of the nodes — I'm not being more specific here because the exact condition depends on the problem.
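The geometric contraction of disagreement can be seen concretely by running equal-weight averaging on a fixed bidirectional ring (a hypothetical topology chosen for illustration — bidirectional, never permanently split) and checking that successive disagreements shrink by a roughly constant factor, just like the RC discharge curve:

```python
# 10 agents on a ring; one dissenting opinion at node 0.
n = 10
x = [1.0 if i == 0 else 0.0 for i in range(n)]

def ring_step(v):
    m = len(v)
    # each node averages itself with its two ring neighbors, weights 1/3
    return [(v[(i - 1) % m] + v[i] + v[(i + 1) % m]) / 3 for i in range(m)]

disagreements = []
for t in range(300):
    x = ring_step(x)
    disagreements.append(max(x) - min(x))

# Ratios of successive disagreements settle near a constant factor < 1:
# the disagreement decays geometrically, like an exponential discharge.
ratios = [disagreements[t + 1] / disagreements[t] for t in range(200, 230)]
print(min(ratios), max(ratios))
```

For this ring the contraction factor is governed by the second-largest eigenvalue of the averaging matrix, so the printed ratios cluster tightly around a single value below 1.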
If you're considering the Byzantine problem, for instance, you maybe don't want the decision to be among the inputs proposed by the malicious nodes. And it's a very hard problem: it's basically impossible in asynchronous systems, and if you've got more than a third of Byzantine faults it's not doable either. Just to give you a feeling for what a solution to this problem looks like: in Ben-Or's consensus, a randomized consensus algorithm, nodes keep trying to find a majority — or a supermajority in the Byzantine case — of similar opinions. If they find one, they re-broadcast it, and if enough nodes have re-broadcast this opinion to go above the Byzantine quorum threshold, then they decide on the value. Otherwise they propose a random value and try again to go through the protocol. The one thing I wanted to say about this algorithm is that it is valid and unanimous if decisions happen — and decisions may actually fail to happen, but they happen with probability one, and that's the tricky part of the proof of the algorithm. Let's compare this to Bitcoin. In Bitcoin you've got miners who try to extend the chain, and the definition of what the object of consensus is, is the longest chain, right? Safety is supposed to come from the fact that if a prefix of the chain is very deep, then it's hard to reverse. But there's always a non-zero chance of getting it rewritten, because maybe we're very, very unlucky and an adversarial player, even without any overwhelming computing power, just gets very lucky and is able to rewrite the entire chain. That's a very unlikely but non-zero event. More importantly for us, the safety of the agreement — meaning the fact that my chain is a prefix of a future chain and so the transactions in it remain valid — depends on the system staying online all the time, forever. That was Victor's point: it's a never-ending process.
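The Ben-Or-style loop described above (find a majority, re-broadcast it, decide above a quorum, otherwise flip a coin) can be sketched as a toy lock-step simulation. This is heavily simplified — synchronous rounds, no faulty nodes, everyone sees every message — so it only illustrates the control flow, not the fault tolerance or the probability-one termination proof:

```python
import random

# Toy lock-step sketch of Ben-Or-style randomized binary consensus.
# Assumption: faultless synchronous rounds, so every node observes the
# same counts; real Ben-Or runs two message exchanges per round and
# tolerates faults, which this sketch deliberately omits.
def ben_or(values, seed=0, max_rounds=1000):
    rng = random.Random(seed)
    n = len(values)
    decided = [None] * n
    for _ in range(max_rounds):
        counts = {0: 0, 1: 0}
        for v in values:
            counts[v] += 1
        # A value held by a strict majority gets re-broadcast by its holders.
        majority = next((v for v in (0, 1) if counts[v] > n // 2), None)
        for i in range(n):
            if decided[i] is not None:
                values[i] = decided[i]          # decided nodes keep their value
            elif majority is not None:
                # With no faults, the re-broadcast support equals
                # counts[majority], which is above the quorum: decide.
                decided[i] = majority
                values[i] = majority
            else:
                values[i] = rng.randint(0, 1)   # no majority: try a fresh coin
        if all(d is not None for d in decided):
            return decided
    return decided

print(ben_or([0, 1, 1, 0, 1]))  # every node decides 1
```

Note the contrast with Bitcoin that the talk draws next: here a node that decides stops and keeps its value forever, which is exactly the "decide once" behavior that longest-chain protocols do not have.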
And I just wanted to point out to you the similarities between this safety condition and asymptotic consensus. The condition is the one informally explained by Satoshi in his paper: the probability of reverting a prefix that's k blocks deep diminishes exponentially in k. It's actually quite imprecise in the paper, because it doesn't say exactly what the conditions are, but what it kind of looks like — especially given that blocks are added linearly in time to the chain — is that if over some period you've got an honest majority, and your chain remains a valid prefix of the chain during that period, then you get this exponential decrease in the probability of getting your chain reverted. It looks exactly like the kind of behavior you observe in asymptotic consensus systems. So that's what I wanted to say: it's not atomic broadcast, because nodes don't decide once and then stop. They actually elect a value that's going to be the decision for the time being, it can change in the future, and the system needs to stay up forever if you want to hope this value remains correct forever. What it would even mean to solve atomic broadcast is not clear, because in a permissionless system maybe all the nodes that were present at the beginning have left and been replaced, and so what it means to select a value among the inputs is not well-defined. What Bitcoin does do is, over segments of time, hopefully give you this property of the probability of your prefix being reverted diminishing exponentially with time. And so the meta point of all this is: when we're thinking about the guarantees provided by blockchain systems, maybe we shouldn't be looking at too-hard problems like BFT consensus, because these are very hard to solve, but instead take solutions from this kind of problem, which is much easier to solve and might still be useful for practical applications. Thank you.
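The exponential decrease in reversal probability mentioned above can be made concrete with the catch-up bound sketched in the Bitcoin whitepaper, a gambler's-ruin argument (the function name here is my own, chosen for illustration):

```python
# Gambler's-ruin bound from the Bitcoin whitepaper: an attacker with a
# fraction q of the hash power (honest share p = 1 - q) ever catches up
# from z blocks behind with probability (q/p)^z when q < p, i.e. the
# reversal probability decays exponentially in the prefix depth z.
def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually catches up
    return (q / p) ** z

# With a 10% attacker, the probability shrinks by a factor of 9 per block.
for z in (1, 6, 12):
    print(z, catch_up_probability(0.1, z))
```

Since blocks arrive roughly linearly in time, depth z is a proxy for elapsed time, which is exactly the "disagreement vanishing exponentially over time" shape of the asymptotic systems discussed earlier.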