Together with Jonathan Chiu, Russell Wong, and our other co-organizers, I want to thank everyone for joining us today. This new series will meet monthly, generally on the last Friday of each month, and it's a platform for people in central banks, academia, and elsewhere to get together and discuss policy-relevant research related to digital payments, digital currencies, and central banking. Each session will be hosted virtually by a different institutional partner. Today's host is the BIS, and I will now turn things over to our moderator for today, Jon Frost.

Thanks a lot, Todd, and welcome to everyone. Also from our end, it's a pleasure to be here, and welcome to this inaugural central banking and digital currencies seminar. We're really excited about this new series, and we're glad to be able to take part on behalf of the Bank for International Settlements. Today we'll hear about new research from Hyun Song Shin, Raphael Auer, and Cyril Monnet on the economics of permissioned distributed ledger technology. We'll have 25 minutes for Hyun to present, followed by our discussant, Hanna Halaburda, for 10 minutes, and then Q&A. My own role will be to promote Swiss timekeeping and to field the questions. A reminder to all participants: please use the Q&A box. If you have a question to pose, put it in the Q&A box, and we'll keep those for the Q&A session. Without further ado, I'd like to pass over to Hyun, who will present the paper.

Thank you, Jon, and thank you, Todd. It's really a great pleasure to kick off this very important initiative. I'd like to present a recent paper with Raphael Auer and Cyril Monnet, who are here with me on the panel. It's on permissioned DLT and the governance of money. I don't think I need to motivate the questions in this audience. The starting point is the idea that money can serve as a substitute for a ledger that records all past transactions. Narayana Kocherlakota has a paper called "Money is Memory", and the idea there is that money can substitute for this theoretical ledger: if you're holding money, it's a sign of goods sold in the past or services rendered, and it does almost as well as the theoretical ledger. What's happened is that this fanciful notion of a huge universal ledger has become closer to reality, or indeed has really been put into reality, especially with the advent of distributed ledger technology and blockchain and its application in Bitcoin. This is the kind of picture we have in mind: we write the history in a series of blocks, and then we have rules whereby these blocks are appended into a chain, hence blockchain. Now, the fundamental question I want to address, and this is a very high-level view, is: what are the respective advantages and disadvantages of a centralized versus a distributed ledger? There is a very strong recognition that distributed ledgers have a robustness that comes from redundancy. And here I don't just mean keeping the same copies of the ledger all over the place. It's more about governance, in that it provides the checks and balances that mean no single party has a monopoly on the truth, and that check and balance extends to the whole system. It avoids governance risks; you're not putting all the eggs in one basket. Set against this, there is also a potentially very large economic cost, and this has to do with what it takes to keep that cooperative or consensus mechanism going.
In the case of Bitcoin and the proof-of-work protocol, there's a special element which has to do with energy use, and that cost is well known. But I want to highlight in this presentation something broader, which is that to the extent that the decentralized consensus mechanism relies on a great deal of coordination, there has to be a proper incentive mechanism in place to keep that coordination going. And potentially you're going to need to divert a lot of resources to provide the incentives to the validators. I don't need to tell this audience about the distinction between permissionless DLT, typified by Bitcoin, where anyone can join and act as a validator, and the permissioned versions. For enterprise use, there are now a number of well-known permissioned versions of DLT where not everyone can join and stamp themselves as a self-appointed validator; you need to be permitted to come in and be appointed as a validator. The reason for this is that although DLT as a notion is very elegant, and the way the Bitcoin protocol works is also theoretically very elegant in that it has this common knowledge element all the way through (you can look at the history of all past transactions, and that, if you like, is the governance mechanism that gives it robustness), it's less well suited for real-life transactions where you have to guard client privacy. For example, if you're a commercial bank making payments on behalf of your clients, it's hardly going to be feasible for you to just publish on the internet everything your clients are doing on their accounts. So there's a variety of issues where you want to be able to combine the robustness with the governance that has to do with maintaining privacy and guarding against illicit finance, money laundering, and so on. So what we want is to look at permissioned distributed ledgers, where there is a known and fixed group of validators whose role is to look at the transactions, do the checks and, as a group, opine on whether a particular manifestation of the block accords with their own records. They then provide an opinion as to whether that block is correct or not. What we're going to be looking at is a voting mechanism as a kind of typical form of this permissioned ledger, where you need a supermajority of the validators to vote yes in order for the new block to be appended to the existing chain. In the real-world applications of CBDC, the wholesale CBDC pilots that the Bank of Canada, the Monetary Authority of Singapore, and many other central banks have actually implemented and experimented with, there is also a hierarchical notion where sometimes there could be a central notary who has the full view of everything, but individual participants will not have the full view and will only know the transactions on a need-to-know basis. So what do we do in this paper concretely? We want to write down an economic model. There will be some scope for gains from trade. It's going to be a very standard model used in monetary economics where there's a coordination element. In the absence of informational constraints, when all the actions are commonly known to everyone, there are well-known implementations of the efficient outcome through standard trigger-strategy mechanisms. However, the friction in this model is going to be that we don't have that common knowledge.
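To make the supermajority rule just described concrete, here is a minimal sketch, not taken from the paper: a fixed, known set of validators, and a block that is appended only if the approving votes reach the chosen threshold. The class, the names, and the two-thirds figure are illustrative assumptions only.

```python
# Hypothetical sketch (not from the paper): a permissioned ledger where a new
# block is appended only if a supermajority of a fixed validator set votes yes.

from dataclasses import dataclass, field

@dataclass
class PermissionedLedger:
    validators: set            # known, fixed set of validator identities
    supermajority: float       # e.g. 2/3 means at least two-thirds must approve
    chain: list = field(default_factory=list)

    def propose_block(self, block, votes):
        """votes maps validator id -> True/False; only permitted validators count."""
        approvals = sum(1 for v, ok in votes.items() if v in self.validators and ok)
        if approvals >= self.supermajority * len(self.validators):
            self.chain.append(block)   # consensus reached: block is appended
            return True
        return False                   # not enough approvals: block rejected

# Example: five named validators, a 2/3 supermajority rule
ledger = PermissionedLedger(validators={"A", "B", "C", "D", "E"}, supermajority=2/3)
ledger.propose_block("block 1", {"A": True, "B": True, "C": True, "D": False, "E": True})
```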
We have to keep track of the actions of all the participants in the economy through a real-world ledger. And having this reconciled ledger in this distributed setting, one that records all past transactions in a truthful way, is a public good. It's a very important public good that enables the economic participants to reap the gains from trade. However, there are going to be costs incurred by the validators, and the question is: how do you incentivize the validators to do the right thing? There is, first, a coordination element, in that if you have a supermajority rule, you can really only gain the benefits of this public good if enough of the other validators also contribute. So there's going to be a public good contribution game, and we're going to formalize that as a global game. The main result of the paper is then to use this set of global game results as an input into solving for the optimal design of the permissioned ledger: how many validators, and what supermajority threshold do you need in order to get the maximum surplus from this economy? There are two forces at work, and this is the tension we explore in this paper. On the one hand, you need a lot of validators and distributed ledgers in order to guarantee strong governance, because if you have one validator who can be corrupted, for example, then that's a danger a distributed ledger can guard against; in this sense, it's much more expensive to pervert history if you have to bribe a lot of validators rather than just one. On the other hand, there is an inefficiency cost that comes from having many validators, because of the inherent coordination issue. And because money is a social convention, because money is a coordination device, this is a feature of the monetary system that we cannot ignore. So let me introduce the model. It's a very standard model you see in monetary economics: time is discrete, there's a discount factor, and each period is divided into an early production stage and a late production stage. There is an efficient outcome where, if the match is good (and I'll talk about the noise that might intervene), two producers meet: one is an early producer, one is a late producer, and they would like to trade with each other. The early producer should produce for the late producer, and the late producer then reciprocates by producing, having obtained that first good. However, because this happens in sequence, unless there is something to stop the late producer from reneging on the promise, autarky is the only equilibrium outcome. And so this is why you need a ledger to keep track of the actions of these producers. Typically in monetary economics, and the paper by Chou and Koppel is a very good example, what you need is some kind of threat, and the simplest way of doing it is just to use a trigger strategy: if you have ever deviated in the past, then you're branded as a bad type, and then no one ever cooperates with you again. That's a credible threat that ensures you always reciprocate when you're the late producer. The only twist, and this is going to be the really important twist, is that we're going to be keeping track of the actions of all the participants in the economy through a distributed ledger.
So first of all, we're going to solve the second stage of the game, where we fix the number of validators and the supermajority threshold, solve the global game, and solve for the allocation subject to those fixed assumptions. Then we go back to the first stage of the game and solve for the optimal outcome by varying the number of validators and the supermajority threshold. That's more of a mechanism design problem, subject to the constraints provided by the global game solution. I think this is a good moment, Jon, for me to pause in case there are any questions on the model.

Absolutely. Any clarifying questions? I think there are two in the Q&A that I see, and one from Fernando Alvarez. Cyril is typing an answer, Fernando. Each period has early production and late production, but it's a discrete-time, infinite-horizon model. Are there further questions of clarification from the panelists or from anyone else? If not, then I suggest people continue with the Q&A box.

Okay, so let me go on. The twist in this model is that the validator has to incur a cost, an idiosyncratic cost, to validate. At the very least, what you need to do is to scoop up the various transactions and then see whether the transactions actually match your own observation. So you have to expend some effort, and we're assuming that that effort costs chi_i for validator i. Now, we're going to have a supermajority consensus protocol such that if more than a fraction kappa of the validators provide the exact same validated ledger, then that becomes a valid block. In that case, you're going to be paid an amount z. And f here is the noise: some proportion f of the late producers actually turn out to be unproductive. This is just noise that provides a kind of seed of doubt, which we'll use later. The payoff is the one on the screen. It's a binary action game: you either work or you shirk. If you shirk, your payoff is zero; we normalize the payoff from shirking to zero. The payoff from working is the two-part payoff you see on the screen. If more than the kappa-hat threshold actually work, they will get the correct update to the ledger, in which case everyone who has contributed is paid one minus f times z, where one minus f is the surplus to the whole economy and z is the payoff to a particular validator. You have to incur the cost chi_i anyway. So we have a public good contribution game with a threshold kappa-hat: if more than kappa-hat contribute to the public good, then everyone who has contributed captures the high payoff. However, if not enough contribute, then your cost is wasted; you just incur the cost chi_i and do not obtain the benefit. Now, for ease of the solution, I'm just going to normalize the payoff and define c_i to be the normalized cost. I don't see a pen here, but if you look at the middle of the screen, we're just normalizing the payoff so that the benefit of the public good is one, and the cost is measured relative to that benefit, so that we can parameterize the public good contribution game with the payoff you see above.
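As a worked restatement of the payoff just described, using notation approximated from the talk (the exact symbols on the slides may differ, so treat this as an editorial reconstruction):

```latex
% One reading of the validator's payoff as described in the talk:
u_i(\text{shirk}) = 0, \qquad
u_i(\text{work}) =
\begin{cases}
(1-f)\,z - \chi_i, & \text{if the fraction working} \ \ge \hat{\kappa},\\[2pt]
-\chi_i, & \text{otherwise.}
\end{cases}

% Dividing through by (1-f)z and writing c_i = \chi_i / \big((1-f)z\big)
% normalizes the benefit of the public good to one:
u_i(\text{work}) =
\begin{cases}
1 - c_i, & \text{if the fraction working} \ \ge \hat{\kappa},\\[2pt]
-c_i, & \text{otherwise.}
\end{cases}
```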
And provided that the supermajority threshold kappa-hat is reached or exceeded, those validators who have contributed to this validated and reconciled ledger collect the payoff of one, and they pay the cost c. Now, for future reference, notice that c can be low when z is high. In other words, if you're willing to provide a lot of rents to the validators, a lot of surplus to the miners, if you like, that is, the validators who update the ledger, then you can bring down the cost. So there is a distributional element to this problem as well: yes, you can make the cost of public good contribution very low, provided you're willing to give a lot of the surplus to the validators. Now the key here, of course, and this is what makes the global game solution work, is that we have an element of strategic uncertainty. We're going to get this through a so-called private values version of a global game. The idea is that c_i is distributed around some common element theta, and the idiosyncratic component of the cost is uniformly distributed over a very small interval and is i.i.d. We're going to be taking epsilon to zero and looking at the limit. It's a private values global game in the sense that you know your own cost, and you're going to be choosing your action based on that cost. It's different from the version where you think of the signal as the truth plus some noise; that's a common values global game, if you like. Anyway, a well-known result in global games is the following. Suppose everyone follows a switching strategy, in other words, that there is some threshold c star such that all the validators who have a cost below c star work, and all those who have a cost above c star do not work. Then, if you happen to be exactly at that threshold, your belief over the proportion of people who are working is actually uniform over the unit interval. So there's radical uncertainty, if you like, over the proportion of people who are actually cooperating in this coordination game. The intuition, and there's a proof in the paper, which we include just for completeness, is that this little noise in the cost of validation injects radical uncertainty in the sense that even though the noise is small, what's important is the order statistic. The question is: is your cost the highest, the lowest, or something in between? Are you the median? Which percentile are you? Provided that the prior gives no information and the noise is also uniform, it's clear that my signal tells me nothing about where I lie in the distribution of costs for the whole population. So I have radical uncertainty over the order statistic. In that sense, my cost is equally likely to be the highest as the lowest, or anywhere in between. And because I'm exactly at the threshold, and the proportion who cooperate are those people who have a signal lower than mine, that proportion must also be uniform. Okay, so that's the intuition.
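That uniform-belief property can also be checked numerically. Below is a small Monte Carlo sketch written for this transcript, not code from the paper, under the stated assumptions of uniform noise on a vanishing interval and a diffuse prior on the common cost component theta; all parameter values are arbitrary.

```python
# Monte Carlo sketch (editorial illustration, not from the paper) of the
# "uniform belief" property: if costs are c_j = theta + eps_j with eps_j
# uniform on [-eps, eps] and a diffuse prior on theta, then an agent whose cost
# sits exactly at the switching threshold believes the fraction of others below
# that threshold is (approximately) uniform on [0, 1].

import numpy as np

rng = np.random.default_rng(0)
eps, x, n_others, n_draws = 0.01, 0.5, 1000, 20000   # x plays the role of c*

fractions = []
for _ in range(n_draws):
    # Posterior for theta given my cost equals x (flat prior, uniform noise):
    theta = rng.uniform(x - eps, x + eps)
    # Other validators' costs scattered around that theta:
    others = theta + rng.uniform(-eps, eps, size=n_others)
    fractions.append(np.mean(others < x))

# The histogram of `fractions` should be roughly flat over [0, 1].
hist, _ = np.histogram(fractions, bins=10, range=(0.0, 1.0))
print(hist / n_draws)   # each bin close to 0.1
```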
Now, this is a really nice lemma for us to solve the problem. Imagine this kind of picture, where the proportion who actually verified the ledger lies on the unit interval. We know that if the realized proportion falls below the threshold, you get the bad outcome; if the proportion who cooperate is bigger than the threshold, you get the good outcome. This is the payoff function you see over that unit interval. And because we know the belief at the threshold is uniform, we can solve for the threshold c star by finding the point where the area of rectangle A is exactly equal to the area of rectangle B. So that's a very simple solution. The second part of the proof just says that you can solve the game through iterated deletion of strictly dominated strategies: you start with the dominance regions in the usual way and then iterate inwards. It turns out that this threshold equilibrium is, in fact, not only the unique equilibrium in threshold strategies; it's the unique equilibrium, full stop. So what this gives us is a really simple solution method. Let's just solve the game: if the cooperating mass is less than the threshold, you get minus c; that's the first term. If the cooperating mass is bigger than kappa-hat, you get one minus c; that's the second term. You solve for c star and you get the expression one minus kappa-hat. In other words, you get this kind of picture, and this is, if you like, the punchline of the solution. The horizontal axis gives you kappa-hat, the supermajority voting threshold, and you're asking what is the marginal type below which validators will contribute and above which they will not. It's just this red line running down the diagonal. So in the limit, when epsilon goes to zero, there is a unique dominance-solvable equilibrium where the public good is provided if and only if you lie below that red line. In other words, this is the picture: you can divide the universe of parameters exactly down the middle like this. If you're in the blue area, you are successful in achieving decentralized consensus in updating the ledger. If you're above the red line, there just aren't enough rents going to the validators to successfully update the ledger. So if you find yourself in the white area, like you see here initially, you have to bring the cost of contribution down in order to successfully produce that public good. In the case of Bitcoin, imagine that the block rewards that go to the miners are getting smaller and smaller, which is what's happening. At some point, relative to the cost of being a miner, it's just not going to be worth your while, even with the fees and everything. So what this says is that you need to give more rents to the validators in order to get the whole consensus mechanism going again. What this does is give us a very clear boundary as to what's feasible.
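For the record, the indifference condition behind that diagonal feasibility boundary, as described above, can be written out as follows (an editorial restatement, not the paper's own derivation):

```latex
% The marginal validator holds a uniform belief over the fraction of others who
% work, so the ledger is successfully updated (fraction >= \hat\kappa) with
% probability 1 - \hat\kappa. Indifference between working and shirking gives
(1-\hat{\kappa})(1 - c^{*}) \;+\; \hat{\kappa}\,(-c^{*}) \;=\; 0
\quad\Longrightarrow\quad
c^{*} \;=\; 1 - \hat{\kappa},
% which is the diagonal red line: consensus is achieved if and only if the
% normalized validation cost lies below 1 - \hat\kappa.
```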
And the punchline is this: suppose we're going to use a distributed ledger like this, and we think of a very realistic mechanism where the validators are not simply automata but self-interested parties, as all the miners in Bitcoin are; they're doing it because it's in their interest to be miners. In our model, the validators are also the late producers in this two-step model. In that kind of situation, we have to allow for the possibility of side payments. So it is possible that, if you have a small number of validators, you can go and bribe them and get them to allow you to double spend. Let's just think about that as a possibility. What would actually make the governance of money the result of a robust mechanism? That's the question we address. So let me finish with this final slide, which is the punchline of the paper. Let me introduce pi as the probability that a bribe is uncovered. Alpha is a parameter that I didn't explain, but it's basically the probability of a match, that is, how much surplus you can get from the match. And beta is just the discount factor. This parameter delta is the key. Delta being very high is the case where you care about the distant future; this is a very well-governed society where bribes are actually uncovered. And the optimal monetary arrangement depends on delta. We can have a whole range of different arrangements depending on the configuration of these parameters. In a high-delta society, it actually turns out that a centralized ledger is optimal: you just make sure that the centralized validator has a very high rent, and so it's immune to bribery. It's a very stripped-down model, and it opens up a whole new set of issues to do with governance and the politics of all this, but in the simple model that's the result. As delta goes down, this is when the distributed ledgers really come in. When delta falls, a permissioned distributed ledger with a small number of validators is optimal. As delta falls further and further, you need more and more of these validators to come in. Below a certain threshold, you want everyone to be there, because you want the maximum governance safeguard. And if delta goes below an even lower threshold, there's just no economic gain that can be reaped. So even this very small change from the full information game is going to inject a great deal of richness into the economics of these institutions. So let me finish there, Jon.

Thanks so much, Hyun. We'll now turn to our discussant, Hanna Halaburda. Hanna, you have 10 minutes, please.

Thank you very much. It will be quite a feat to do it in 10 minutes, because I have lots of things to say; thank you very much for inviting me to discuss this paper, I'm very excited. I have already selected only some of my points, and I will aim at 10 minutes. This is clearly a very important and insightful paper, and not only because it's a timely topic. We have already had a significant amount of research on permissionless blockchains, and now we're turning to analyze permissioned systems more closely. This paper is the first to analyze the incentives in a permissioned system in such great detail, and this is really great. What it allows us to do is to learn about the complexity of the economic forces involved in incentivizing the validators in a permissioned system, which is different from a permissionless system. It also illustrates the general issues that we are up against in the analysis of permissioned blockchains, and I'm going to focus on that. The paper has three parts: there is a model of trade, there is a validation game, and then there is the optimal design part. In my remarks, I'm going to focus only on the validation game, to limit myself. The validation game has two stages.
The validators first validate the label of the producer, and then they validate the production that has occurred. In both stages they need to actually verify the information, and then they vote by sending a message. So they do it twice, in two different contexts. Both verification and sending messages are costly. This means that the validators need to be compensated if we want to keep them participating in the system. They don't have to participate, and they are opportunistic or self-interested in the sense that they will only participate if it makes sense for them, if it's profitable. And once we account for the fact that they respond to incentives and want to be compensated for verification and sending messages, they are also subject to bribes. So we need to have a system that both compensates the validators and prevents them from taking bribes, so that the system will be trusted. What makes things worse is that the validators get their payoff, they are compensated, only if a sufficient number of other validators also validate the state and the state is good. This gives rise to a coordination game where a validator may not find it worthwhile to validate and exert this costly effort if he does not believe that a sufficiently large number of other validators will do the same. So if we believe that other people are not going to validate, we're not going to validate either, which would be a disaster for the distributed ledger. This coordination game is solved pretty ingeniously as a global game, where there is a variable, idiosyncratic cost of sending messages, which is private information. This allows the authors to solve it as a quite complicated global game, but it yields quite clear solutions. The solutions are, first of all, that if we have a larger supermajority rule (it was tau in the paper, and I noticed it was k-hat in the presentation), it is going to limit the incentive for bribes, which makes the system more secure, but it also makes it more costly to prevent free riding in the provision of the public good. So there is a need to balance those forces, and there is an optimal k-hat to be derived. We also get the result that if we require unanimity, so k-hat is 100%, the public good will not be provided at all. And under certain conditions, we also see that having a centralized system, where there is only a single validator, is better. So what the model provides, in all its 90 pages and very complicated steps, is a very important insight into how to incentivize the validators to do their job in this particular setting. And it also gives a roadmap for how to do it in other settings. What worries me a little bit is that some of those results are almost assumed, because the model fails to tackle the issue of why we need the validators to do their job at all. There was a remark about why we need a distributed ledger: because it's going to provide some governance benefits, checks and balances, not only redundancy. But this is not in the model. And this, in fact, is not a critique of just this paper. This is what we see in the nascent literature on the economic analysis of permissioned blockchains: it is not really clear what we want the distributed system to achieve.
And we focus on the cost and on incentivizing the distributed system to arise, without incorporating the other side. I'm going to claim that we cannot model only one side without modeling the other. So, on the face of it, we say that the validators allow us to maintain the ledger. Let me actually ask: what is the ledger? We have heard about the ledger in the presentation, and in the paper the ledger is mentioned many, many times. So what is the ledger in a distributed system? First of all, we have many validators, and each of the validators in a distributed system is keeping a copy of a ledger. The issue is that those copies may differ. So the point is that there is not one ledger but many, and the question is how we are going to reconcile them and provide consistency across many, many ledgers. And if we consider that all nodes are equal, that they are opportunistic and self-interested, and that there is no special node more important than the others, then maintaining consistency of the ledger between the nodes is a major challenge. All permissionless systems have this setup where all nodes are equal and there is no more important node. And many of the permissioned systems also have this setup: once permissioned, no node is more important than the others. So how do we maintain consistency of the ledger? Well, our instinct is: let's get the nodes to vote. And this is the tool being used in many of the papers on permissioned blockchains. Now that we know the identity of the nodes, we can ask them to vote. Here's the problem with that: who would tally the votes? In a typical voting system, you go to a voting booth, and you need a whole committee that is going to tally the votes. Do you trust that entity to tally the votes correctly? If you don't have any special node that is more important than the others, you can't just do simple voting where everybody votes into one place, because then you would need to trust the entity that is collecting the votes. That would make this entity a special node, a more trusted node than the other nodes, because otherwise this entity may also be opportunistic and may misreport what the votes of the nodes have said. So what is typically done in these distributed systems where all nodes are equal is local voting: all nodes send their votes to all other nodes, and everyone tallies the votes that they receive locally. I see whether I'm getting 80% white or 80% black. The issue with that is that nodes can send different votes to different recipients. This is a major problem, because my tally of the votes may be different from your tally: I have tallied all the votes and I see 80% of votes for white, but you may have 80% of votes for black, and I know that you may actually have received different votes than I have. So we need multiple rounds to reconcile and make sure that we update the ledger the same way. I collect the votes I receive, with the signatures of the nodes that voted, and I send to everybody else not only my vote, but all the other votes that I have gotten from all the other nodes. And then we iterate until we agree on what the update to the ledger is going to be. I had a one-minute warning, yeah. So this is why PBFT and other Byzantine fault tolerant mechanisms are so much more complicated than just voting.
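A toy illustration of the equivocation problem just described may help; this is an editorial sketch, not something from the discussion slides, with made-up node names and a simple 3-of-5 majority rule assumed for concreteness.

```python
# Toy illustration (editorial, not from the discussion) of why naive local
# vote-tallying can leave honest nodes with inconsistent views: an equivocating
# node sends different votes to different recipients, so local tallies disagree
# and a single round of voting is not enough -- hence multi-round BFT protocols.

honest = ["n1", "n2", "n3", "n4"]

# Each honest node's genuine vote on the candidate block:
votes = {"n1": "yes", "n2": "yes", "n3": "no", "n4": "no"}

# The equivocator "e" tells half the nodes "yes" and the other half "no":
def vote_seen_by(recipient, sender):
    if sender == "e":
        return "yes" if recipient in ("n1", "n2") else "no"
    return votes[sender]

# Each honest node tallies only the votes it personally received:
for node in honest:
    tally = [vote_seen_by(node, s) for s in honest + ["e"]]
    yes = tally.count("yes")
    decision = "accept" if yes >= 3 else "reject"   # simple 3-of-5 majority
    print(node, "sees", yes, "yes votes ->", decision)

# n1 and n2 accept the block while n3 and n4 reject it: the local ledgers diverge.
```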
And you can not only take whole courses on this, you can build your whole academic career around how to solve this problem. Okay, so what is happening in this paper? Obviously we do not have all of this; we have simple voting. This you can do when there is one node that is more important, and in the paper there's a notary. This node keeps the authoritative copy, which is why we can call it the ledger, and it tallies the validators' votes. So now, do we trust this node to write in the ledger what the nodes have voted for? Well, we do trust this node, at least in the paper. So is there any accountability for that node? Do we even consider the possibility that the notary is going to lie? And if we trust the notary, then what do the validators do that the notary cannot? This brings us to the question: what do we need the validators for, really? And what do we expect the decentralized system to achieve in a permissioned setting? One answer could be consensus where no node is more important than the others; this is clearly not what is going on here. Another is that the validators may be checking consistency of the ledger by keeping the notary in check against misreporting. Again, this is not modeled in this paper; we assume that the notary is always trustworthy. So what the validators seem to be doing is aggregating some disparate information from outside the ledger. They behave as oracles. But then this is not really about keeping the ledger; this is about aggregating information, and we already have a literature on that. They behave as oracles. And whatever we expect from the decentralized system, we should not expect cost savings, because by definition there is redundancy in operations: the validation needs to take place many, many times, and it is going to be costly. So we need to figure out why we want them to do that. Okay, let me just take 30 seconds to show why this is a problem if we do not model those things at the same time for this particular game. What do the validators bring to this particular game? First, they validate the label: they verify the label of the producers, which they read from the ledger, or at least from the notary. And then they vote by sending a message to the ledger. But wasn't this information already there in the ledger? And if they are not sending this message to the ledger, but instead sending the message to the early producer, then they are basically serving as a connection between the ledger and the real world and back. But then this is a different task from maintaining the ledger. And in the second phase of the validation, the production validation, they're supposed to verify whether production took place according to the plan. So do they have boots on the ground and actually observe it? Or do they just take the report of the producer, signed by the producer? Do they have the same data, or do they have noisy signals that we need to aggregate? And is this something that our trusted notary cannot observe directly? In other words, what is the benefit of the redundancy? What I'm claiming is that all those rules about the majority, about how many nodes we need to validate, may actually depend on what we expect to gain from this distributed system. Okay.
I am going to skip the last couple of comments, and I'm happy to discuss them with the authors on the side. But to sum up, it is a great paper, really a great and detailed analysis of the incentive system needed to incentivize validators. And it is extremely important because it gets us thinking not only about incentives and the optimal design of a particular permissioned system, but also about the challenges in analyzing permissioned systems themselves. So not only is it already complicated; I'm claiming it's not complicated enough, sorry to the authors. But I think there is more to be learned, and we wouldn't be able to get there without the paper as it already is. So let's stop here, and thank you very much.

Thanks a lot, Hanna. I would like to give Hyun the opportunity to respond to those comments first, and then we'll open it up for Q&A. Hyun.

Well, that's a great discussion. I think that's really very deep. Because it's so deep, I'm going to just let my co-authors address it. So Raphael and Cyril, over to you.

Hi, everybody. Thanks, Hanna, for the discussion. It was very interesting. The way we were thinking about it was that what you call the notary would actually be a program contained on the ledger, and the program would actually just count the votes. That's the way we would think about it. Now, maybe we need to go deeper and understand where this program is coming from and why people agree to this program in the first place, but that we took as given in this setup. The other thing was who communicates with the ledger through the validators: it's actually the producers themselves. In particular, one idea of having a permissioned ledger is that you want to preserve anonymity, and in particular anonymity of history. So when two producers meet, you actually don't want these producers to have access to the ledger; that's why you have an additional, let's say, layer of validators, to preserve the anonymity of previous trades. These guys prevent you from having direct access to the ledger; they control access to the ledger. But it's great that there's still work to do, because that's how research should go. So thanks for the comments.

I think your point about the central notary is a very good one. As you know, some formalizations of these permissioned DLTs, like Corda, actually do have a single notary. And the reason for that also has to do with some of the legal underpinnings of what actually counts as settlement. For example, in a securities transaction you need a timestamp, and if you had many different notaries, you would have different timestamps; all kinds of legal problems come from that. So there are other reasons why you would need a notary. I think our model can probably be specified in a way that takes the full variation into account, but I certainly have to admit that I was taking it in a much more naive way, that it was just a matter of voting.

So I'd like to invite further questions. First of all, among the panelists, would anyone like to pose questions to Hyun, Raphael, and Cyril? I would. Please go ahead. So, it was a really interesting paper and a really interesting set of comments there.
It's fascinating to listen to. I think the comments actually help us see that this is building up from a particular direction, where the global games literature has been used quite a bit to think about banks. And I can see an underpinning of this story as being a story about banks and bank runs. That is to say, these guys are also to be thought of as the oracles, the monitors watching whether a bank is doing the right thing or not, and the majority vote is closing down the bank. In that respect, I was wondering, reinterpreting it in that way, what are the additional elements in this story that build on that previous existing literature? And I guess the answer I'm proposing, but I'd like to get your reaction to it, is that we're imagining here something about the corruptibility of the holders of very junior debt. That is, we want to have the guys out there holding junior debt as natural monitors of a bank, and we're worrying about the fact that we have too few of those natural monitors, and they might be out there being bribed by the bank itself to give invalid reports. Have I got that right? Is there more than that in the story, or what other pieces are missing?

Charlie, I think the second part of the paper, about corruptibility and side payments, is, if you like, building on top of the underlying coordination problem. I think the commonality there is the coordination problem, and the idea that if you have a global game formulation of a coordination problem, you would normally expect to see some inefficiency because of the radical strategic uncertainty that I mentioned. So you have to give a lot of surplus to the people playing that coordination game in order to get the cooperative outcome. If there is one very strong result that I want to leave you with, it is that expecting unanimity is just too much. If you recall the book by Douglas Hofstadter, Gödel, Escher, Bach, he talked about reverberant doubt: yes, there is a cooperative outcome there, but if you need everyone to do the right thing, and doing the right thing is costly, then you begin to wonder, even if I do the right thing, and even if virtually all my neighbors do the right thing, am I really so sure that there isn't someone out there who wasn't paying attention and who might actually press the wrong button? And if you catch yourself thinking that way, well, maybe everyone else is having this same doubt about someone who wasn't paying attention, and that means this kind of doubt reverberates. The global game solution is just a formalization of that doubt, and you need a huge surplus to go to the cooperative players in order to overcome it. So what this says is that when you create that supermajority game, you had better not build in something like unanimity; you need something reasonable, to overcome this reverberant doubt.
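One way to see the unanimity point in the notation of the earlier threshold result (an editorial restatement, not a new result):

```latex
% As the supermajority requirement approaches unanimity, the cutoff cost
% collapses:
c^{*} \;=\; 1 - \hat{\kappa} \;\longrightarrow\; 0
\quad \text{as } \hat{\kappa} \to 1,
% so with any strictly positive validation cost, the public good of an
% updated ledger is never provided under a unanimity rule.
```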
How does the two-step version of the validation affect the game's structure? I mean, the global game itself is relatively standard.

So the global game is simply looking at the two steps of the verification: verifying the identity and then verifying the transaction. That's not really very important for the solution, but I think, as Hanna has shown us, we need to be much more careful about how we model the verification process itself. There's a whole other layer that we have abstracted from, but which would be important if we were to implement this in real life. Cyril, Raphael, do you want to come in? No, I think that's fine. Thanks. Yeah, same here.

So I think we have room for one or two more questions. Who else among our panelists would like to pose a question?

Can you just explain again the mechanics of alpha, how it works? Delta was so important; delta is kind of proportional to alpha and to the gains, typically beta over one minus beta because of the trigger strategy, but you didn't have time to explain the mechanics of that.

Yeah, let me leave that to Cyril. So alpha is just the probability of a match; it's basically a surplus parameter. Sorry, I forgot the letter for the one about being caught. Yes, pi is the probability of being caught. So pi has a very similar impact to the discount factor itself: clearly, if you're in an environment where you're likely to be caught, then of course you care much more about your future payoffs. You would lose the gains from trade, because you could also get excluded, so what you lose is the gains from trade times the probability of being caught over the remaining time. That's the punishment condition, and that's what the different delta thresholds are indexed by. Exactly. Yeah.

So the punchline is that if you're in an environment where it's easy to get away with bribery and side payments like this, then you want the reassurance, the safeguard, of having a distributed ledger system in order to have a more robust governance system. But typically, if you look at the particular parameterizations, in most cases you're just much better off with a centralized ledger. It's only in these very special cases, where you cannot guarantee good governance, that the case for distributed ledgers really comes in.

I think that's a very nice way of wrapping up the arguments of the paper in general. I know that we have a hard stop for many people at the hour, so those of you who wish to leave then, please go ahead. I understand that Cyril is able to stay on a bit longer and answer any further questions for those who'd like to stay on for five or ten minutes. So we're leaving, thank you. And for those staying on, I'll hand the floor to Rod.

I'm going to leave you in the capable hands of Cyril, because actually I've got a hard stop now, so I'd better say goodbye. It's great to see everyone. Thanks for joining, and thanks, Hanna, for the great discussion. Thanks, Hyun.

All right, so we'll turn now to Rod. Yeah, sure. I was just going to ask a question about the costs. The costs in this framework, there's no proof of work, of course, so the costs are really just the verification costs of tracking what's in the ledger. And I presume these would be quite low, which I guess is good for the story. But I was wondering if it matters that these costs would be variable, in the sense that, unlike in something like proof of work, where the protocol automatically adjusts to keep these costs aligned with what they want them to be, here the costs would be what, exactly? They're the costs of just keeping track of transactions in the ledger, and I guess these would vary with block sizes and with technological improvements.
So I guess my basic question is maybe a little bit more about these costs and what it would mean if they're not constant over time.

Cyril, do you want to go, or shall I? Go ahead. So I think it's reasonable to assume that in many systems these costs are actually really low, right? They're really the costs of operating a node. And the stochastic element we think about as connectivity outages or something like that. The good thing is that this global game set of solution methods works best with very small costs and noise that goes to zero. I'm not sure, and this goes back to Charles's comments earlier, whether we can also interpret them more broadly as monitoring costs, where you exert real effort to monitor whether production really happened; you're right that that would be a bit of an outside-the-ledger interpretation. I think you can build a story where that could be the interpretation too, but ours is really in line with what you just said. Great.

Okay, further questions from our panelists? I don't see any more hands. One more: Bruno, or are you just waving goodbye? Bruno, you're muted. I'm going to unmute you quickly.

Yes, thank you very much. I'm sorry, I mixed up the times; I thought I was arriving right on time and in fact I'm arriving at the end. Sorry. Anyhow, costs. It depends on what it is you want to store on the blockchain. If the only thing you want to store on the blockchain is ownership of coins, the resources it takes to check validity are relatively small. But if you would like to store something more complicated, like smart contracts, on the blockchain, then checking validity would actually take significant resources. So maybe in the case of money, if we only look at the currency, the cost can be deemed small. But if you enlarge the set of things you want to register on the blockchain, then the cost can become larger.

Cyril, would you like to respond to that, or Raphael? It's a very good point. It shows Bruno Biais's mastery, coming in at the end of the seminar and answering the question. Wonderful. That's my question.

Yeah, Ricardo, please go ahead. Yeah. I mean, the whole thing looks a little bit like decentralized credit scoring. The system we have now is that we have, I guess, private companies, and they compete, and people can pay attention to each one of them. Is this the same? I don't know; it's the first time I see it like this. Is this the same kind of decentralized record keeping? You know what I'm saying? It feels different to me. This is interesting, but I'm not sure it's the same as helping us settle transactions.

No, no. This is the way we thought about coins being exchanged, in the sense that we were thinking about it coming from Narayana's paper, where if the label or the history is correct, I mean, if the history reflects good behavior in the past, then you have the right to trade. So in that sense, this is a credit scoring economy if you want, but it was easier to think about it this way than to have coins being exchanged. But I think the mapping is possible.

Yeah, so the place where I got lost a little bit is that I was expecting something more Bitcoin-like, where anonymity goes all the way. But here you're really not anonymous at all; you can be excluded. So that's the tension with what I was expecting to see. Yeah.
But that's exactly the point of keeping history: you can exclude people. Well, but you can start afresh with a new history. You know what I'm saying? It's different; they're not the same thing. In Bitcoin, you can start a new wallet. You can exclude a wallet, maybe. Yeah. So that's an interesting point, in the sense that here we have agents, but the way you should think about it is as a wallet ID, right? As soon as you have a wallet that misbehaves, that wallet is going to be excluded. Except there's nothing in it. Yeah. So you can think about it as a proof-of-stake implementation in that way.

I see we have a question from Jacob Leshno as well. Jacob? Yeah, it's more of a comment than a question, but I think the oracle interpretation makes a lot of sense to me. But if you think of a ledger like Bitcoin, the validation really plays no role in Bitcoin, because anybody who reads the ledger can just ignore blocks that violate the rules of the system. Bitcoin would work in exactly the same way if you included all the blocks that are in violation of the rules inside the ledger; everybody can copy it and distribute it, it's just that readers would have to ignore the non-permissible blocks.

Raphael, Cyril, do you want to respond to that? Cyril, maybe you do; I had a connectivity problem. Well, here we are really focusing on a permissioned protocol, not permissionless like Bitcoin. Maybe just another related comment: I understand that you want to do things that involve privacy and non-disclosure of information. There are several ways to do that, one of which is fully decentralized: zero-knowledge proofs. There are some companies trying to build this now, where you still use an open ledger that everybody can see, and you just use cryptography to hide everything except the authenticity of the ledger. Sorry for being vague, but basically cryptographic magic allows you to provide anonymity while still using the same way that Bitcoin verifies transactions. And again, every third party can verify the validity of everything that involves only the ledger. Of course, you need oracles or somebody else to verify whether somebody really delivered on something in external life; the ledger can only show that the record is okay, it doesn't say anything about real life.

Bruno, I think you have your hand up. Yes, thank you. I wanted to come back to Ricardo's question, which I thought was very insightful. Maybe what Ricardo is pointing out is that here, if I understand, and I've read the paper, which is why I'm talking about the paper, you're talking about central bank digital currency; you're not talking about a cryptocurrency like Bitcoin. And I think these are different animals, if I'm not wrong. That's why, while in Bitcoin we have full anonymity, in the case of a central bank digital currency you don't necessarily have full anonymity. In fact, the central bank can very well demand some registration of the wallet owners. And that comes to Ricardo's point: once you have this registration, then you are in a position to punish the guys if they do the wrong thing. Raphael, Cyril, do you agree with that characterization? Well, yes, but we don't think about it in a way that you can punish them with an exogenous mechanism, right?
You only have the endogenous mechanism of having the participants in that system identified and having the threat of kicking them out. And that's what you work with. And we want to examine how well this system works in a centralized or in a decentralized manner. So I see Hanna. Oh, I don't know if I agree, but I like it. Good. All right, we'll give Hanna the floor for the last question.

Yes, so it is related to what Raphael just said, and I actually had a comment on it in a slide that I skipped. You assume that the only punishment that can be exerted, and by the way, the punishment is really for the validators; pi is the punishment for the validators, whereas there may be some punishment for the producers because they have bad credit, so nobody will want to trade with them, but who knows. The punishment pi is only for the validators, for taking bribes. And you assume that the only way to punish them is to exclude them from future dealings. But if you know their identities, why can't you punish them more than this? Why can't you levy a fine? With a high enough fine, you may actually remove any incentive to take the bribe. And you can do it by holding in escrow a certain amount of the validators' benefits before releasing them. This is what you mentioned as related to proof of stake. But this would alleviate a lot of the issues that you have in the system and would take advantage of the fact that this is a permissioned system. You seem to be trying not to take advantage of the permissioned system as much as you can, and to be tying your hands behind your back.

Yes, so I think if you allow for the possibility of external punishment, then the easy, straightforward result is that you can achieve first best with a central validator, right? I think then you run a fully centralized system. That's the axiomatic result. No, but when you exclude them, that is also an external punishment. Yeah, so here they are excluded; this is the worst punishment you can impose on these guys. This is what you assume. Why wouldn't you be able to impose a fine on them? It's just as external as excluding somebody; someone needs to exclude them. There is a mechanism, a centralized mechanism, that grants the permission. So basically with capital you would be able to achieve a stronger punishment, because they would lose the capital and, on top of that, they would be excluded. Yes. Okay. Thanks, Hanna.

I think these are good points, and there is ample material for discussion on these topics still. I'd like to thank everyone again for taking part, with many thanks to the authors and also to our discussant. I understand that Todd will give us a few words to close.

Yes, thanks, Jon. And thanks again to Hyun, Cyril, Raphael, and Hanna for getting our new series off to a great start today. Thanks to everyone for participating. You'll be able to find the slides as well as a link to the video of today's event on our website, CBandc.net. There's a link to that site in the email you received with the link to today's event. And we hope you'll join us again next month, on April 30th, when the ECB will host our next event. Martin Schneider will present, Dirk Niepelt will be the discussant, and Katrin Assenmacher will be our moderator. Until then, I hope everyone has a pleasant morning, afternoon, or evening, depending on where you are. And thanks, everybody. Thank you, everybody. Thanks a lot. Goodbye. Thank you very much too. Thanks. Thanks so much. Thank you.
Yeah, thanks, Hanna. I thought the discussion was fantastic. So, Russell, are we able to stop the YouTube stream?