All right, so thank you everyone for being here today. I'm delighted to be here in London to present the recent research advances we have at IOHK Research, in particular focusing on the Ouroboros protocol, which is the backbone of Cardano. The plan for today is to go over the important research streams we have, which cover the design of the protocol itself, stake pools, incentives, sidechains and more. There's a lot to pack into one hour, so I'm going to press ahead. Just briefly, as an introduction: I'm Aggelos Kiayias, professor at the University of Edinburgh and Chief Scientist at Input Output, and a lot of the work that you'll see here has been done with many great researchers at IOHK and at university partners. Some of them are at the Blockchain Technology Laboratory at the University of Edinburgh; others are at universities in Europe and the US. I'll just mention them by name: Christian Badertscher, Lars Brünjes, Duncan Coutts, Peter Gaži, Alexander Russell, Bernardo David, Elias Koutsoupias, Roman Oliynykov, Vassilis Zikas, Aikaterini-Panagiota Stouka, Dionysis Zindros. All right, let's see if this works. So here's the talk plan. First I'll start with the overarching goal we try to achieve in this type of research, which is to build robust transaction ledgers. I'll give some background on proof of stake, and then I'll go into a little more detail on how the Ouroboros protocol works. Then we're going to see how stake pools can be formed in that system and what the incentive structure of the protocol is, and then I'll cover sidechains. So, first stop: robust transaction ledgers. A robust transaction ledger is the problem that the Bitcoin protocol solves. Now, it's very important in computer science in general, when you try to develop a new algorithm or a new protocol, to understand what problem you are trying to solve.
So a lot of the initial research on my side, also together with people I collaborated with, was to understand what the problem is that we're trying to solve. The early years of this research led to a number of papers trying to describe precisely the formal definition of a robust transaction ledger. It was joint work which we published initially in 2014 and which then appeared formally in the proceedings of Eurocrypt 2015, now referred to as the GKL paper or the backbone paper. This was joint work with Juan Garay and Nikos Leonardos; that's the link to the paper. There was a lot of follow-up work that refined this model and the definitions. I'll just mention work I did with my PhD student, Giorgos Panagiotakos, where we defined additional properties; work by Pass, Seeman and Shelat, who studied partial synchrony in the same setting; and more recently work by Badertscher, Maurer, Tschudi and Zikas, who studied simulation-based definitions for it. All that boiled down to a body of literature that now describes well what the problem to be solved is. Now, this is very useful, and I have to say this is not about proof of stake. It primarily serves as a formal definition of what we are trying to solve. It is also a yardstick that can be used by anyone else who tries to solve the same problem: there is a formal framework and there are definitions they can use to demonstrate that their protocols are indeed solving the same problem. This is another very important aspect that we advocate: it is very important to have formal definitions that describe precisely what it is we are trying to solve. So this work gave rise to two basic properties called persistence and liveness, which capture the two aspects you would expect from a robust transaction ledger. Persistence asks that the ledger where transactions are recorded is immutable, and that transactions therefore persist.
And liveness says that new transactions are incorporated into the ledger. So if honest parties participate, they will see their transactions incorporated into the immutable ledger. These are the two fundamental properties you would expect from a robust transaction ledger. And you should be able to get these properties despite the fact that some of the parties may want to operate in a way that violates them. For instance, some transactions being removed from the ledger would be a double-spending attack, and some transactions that are posted never appearing would be censorship. So realizing such a ledger was, as expected, achieved by Bitcoin. Nevertheless, in retrospect, if you cast this in the wider body of literature known to computer scientists about this problem, it was rather unexpected. Consensus as a problem had never been considered in this setting, and some people may even have dismissed it as impossible to solve in this setting assuming mere honest majority. I'm not going into more detail about this here; it is of interest to people who have studied the theory of distributed systems. What's important here is that Bitcoin as a solution was recognized early on to have significant scalability and efficiency disadvantages. So the natural question that arises is: is it possible to realize the protocol in a more efficient way without compromising its basic assumptions? And here is exactly where the type of definitions I showed you before matters. Once you have the definitions, once you know what you want to achieve, you can throw away the protocol and start from scratch, clean slate. Is it possible to come up with a different protocol, perhaps a completely different protocol, that solves the same problem? Many people have thought about that, and one important type of solution that emerged was the proof-of-stake one.
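To make persistence and liveness a bit more concrete, here is a minimal sketch of my own (not from the talk or any IOHK code) of how one might check these two properties in a toy simulation, where each honest party holds a chain of blocks and a block is just a list of transaction ids:

```python
# Hypothetical sketch: checking persistence and liveness in a toy model.
# A chain is a list of blocks; each block is a list of transaction ids.

def settled_prefix(chain, k):
    """Blocks buried at least k blocks deep: the 'immutable' part."""
    return chain[:max(0, len(chain) - k)]

def check_persistence(honest_chains, k):
    """Every pair of honest parties must agree on their settled prefixes."""
    for a in honest_chains:
        for b in honest_chains:
            pa, pb = settled_prefix(a, k), settled_prefix(b, k)
            n = min(len(pa), len(pb))
            if pa[:n] != pb[:n]:
                return False
    return True

def check_liveness(honest_chains, tx, k):
    """A transaction submitted long enough ago must be settled everywhere."""
    return all(any(tx in block for block in settled_prefix(c, k))
               for c in honest_chains)
```

The parameter k plays the role of the settlement depth: violations of persistence (a double spend reorganizing the settled prefix) or liveness (a censored transaction) show up as these checks returning False.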
So this brings me to some background on POS, which will certainly be central to this talk. Generating the next block in Bitcoin is a little bit like an election: it's essentially a randomized process that elects one particular participant to issue the next block. Now, there is a wide body of literature on what computer scientists call leader election protocols, but Bitcoin solves this in its own unique way. What happens is that parties engage in the protocol, try to solve a cryptographic puzzle, and their likelihood of being elected is proportional to their hashing power. What happens in proof of stake is in some sense parallel to this process. Instead of having hashing power as, if you want, the main way you participate in this process, this lottery if you like to call it that, you substitute hashing power with stake. And stake is a virtual resource that is recorded in the ledger itself. So instead of miners you now have stakeholders, who are identified in the ledger, and then there is a randomized process that elects the next party to produce the block, weighted according to the stake each participant has recorded in the ledger. So the protocol becomes somewhat more self-referential: the ledger itself determines the weights and the steps that have to be undertaken in order to produce the next block. A number of approaches have appeared in the literature or been proposed as parts of systems. I'll just mention a few, but it's important to categorize them into two broad groups. One of them you can call, essentially, proof-of-stake blockchains, meaning the protocol uses a hash chain and some type of longest-chain rule. So the protocol to some degree mimics the Bitcoin blockchain protocol, but removes the proof-of-work component and replaces it with a proof-of-stake component.
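The stake-weighted lottery just described can be sketched in a few lines. This is my own toy illustration in the spirit of "follow the satoshi", not IOHK code: a seed is hashed to pick a pseudo-random coin index, and whoever owns that coin becomes the leader, so the chance of election is exactly proportional to stake.

```python
import hashlib

# Hypothetical sketch of stake-proportional leader election
# ("follow the satoshi" style): hash a seed to pick a random coin index;
# the stakeholder who owns that coin is elected slot leader.

def elect_leader(stake, seed):
    """stake: dict stakeholder -> integer number of coins. seed: bytes."""
    total = sum(stake.values())
    digest = hashlib.sha256(seed).digest()
    coin = int.from_bytes(digest, "big") % total  # pseudo-random coin index
    for holder, amount in sorted(stake.items()):
        if coin < amount:
            return holder
        coin -= amount
```

In the real protocol the seed comes from the epoch randomness beacon and the election is publicly verifiable (for instance via verifiable random functions in later versions); here a plain hash stands in for both.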
The other class of protocols upgrades results from classical Byzantine fault-tolerant protocols and recasts them in the POS setting. Many protocols have been proposed in these categories, and some of them draw ideas from both, so the categorization I'm making is, let's say, from first principles. Now, going back in the history of the discussion of this problem, there is a folklore belief among people who follow the Bitcoin space that it is impossible to build a robust transaction ledger protocol with proof of stake following the logic of the Bitcoin setting. Two main reasons are given for that: the first is called costless simulation, the other is long-range attacks. Costless simulation refers to the fact that no physical resources are used in producing blocks. So it is possible for someone operating the protocol to invest, quote-unquote, in every possible alternative history, or every possible execution of the protocol, assuming of course that these are not exponentially many. But in principle they can invest in more than one of them, and this will effectively incur no cost. The distinction here, based on cost, is that in Bitcoin, when you are extending one particular version of history, you have to commit to it, because you effectively take it and hash it into the proof-of-work instance you are trying to solve. So once you solve the proof-of-work instance, that instance carries inside it the history of the protocol that you have committed to, and when you transmit it, you are effectively also transmitting the history you committed to. Of course, you do not have to commit to just one of them. You can say, okay, I'm going to try to commit to, let's say, two alternative histories.
But because you have to hash the history and then attempt to solve the proof of work, that means that if you have some mining equipment, you have to spend a certain percentage of it extending one history and the remainder extending the other version of history. That would not be the case here: the virtual resource you have appears valid in both versions of history and is, in some sense, doubled. So you could potentially, indistinguishably, follow version of history A and version of history B, then find out which one is, let's say, most agreeable to you (maybe you get more rewards in one version of history compared to the other) and then publish that one. So this is a fundamental difference, and there is nothing that can be done to change it; it is exactly the nature of a POS protocol. Any POS protocol will enable this "nothing at stake" type of behavior. The question, though, is: even though this is a problem, is it something that kills the whole approach, or is it possible to mitigate this type of behavior with clever protocol design? So that was one problem. Another problem that was identified is what's called a long-range attack. In a long-range attack, imagine you are a node joining the network at some point, let's say way after the network has started, and you are confronted with a certain history, or perhaps a few alternatives. The protocol should enable you to choose the right history, and what is "right" would by definition be the one followed by the majority of participants in the system.
Of course you cannot exclude small splinter groups from following any history they want, but the point is that a newly joining node should be able to find the protocol history that corresponds to the majority. The bootstrapping problem associated with that is exactly this question: is it possible for a new node to bootstrap the protocol without any assistance or prior knowledge about what's going on in the network? This is particularly important because you don't want nodes that go offline to have to use a trusted party to help them get initialized to the right history. To put a picture on that: here is a new party joining the protocol, trying to distinguish between two histories, and here is what happens in proof of work. In proof of work, the adversarial version will be substantially shorter, counting difficulty as length. So in the proof-of-work setting it's possible for a new party to figure out the right history, because it's the one with the most accumulated difficulty. That's a unique characteristic that comes with proof of work, exactly because you count the amount of work that has been invested in a particular history. So in proof of work this problem is resolved, and you can categorize all these issues under something that we call dynamic availability. The dynamic-availability setting is an environment where parties join and leave at will; the number of online and offline parties changes dynamically over time; there is only loose clock synchronization and network connectivity; and the protocol has no a priori knowledge of the participation level, so it does not know, let's say, that at time X that many parties are active. All these characteristics are the hallmarks of how Bitcoin works, and what we would like is to solve this question.
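The proof-of-work bootstrapping rule just described fits in a few lines. This is a toy illustration of mine, not Bitcoin's actual code: a fresh node sums per-block difficulty and adopts the chain with the most accumulated work, so a longer but lighter adversarial chain loses.

```python
# Toy sketch: a newly joining proof-of-work node picks between candidate
# histories by accumulated difficulty, not by block count.

def accumulated_difficulty(chain):
    return sum(block["difficulty"] for block in chain)

def bootstrap_choice(candidate_chains):
    return max(candidate_chains, key=accumulated_difficulty)
```

The point of the sketch is the contrast with proof of stake: there is no analogous physical quantity to sum in a POS chain, which is exactly why Genesis needs a different rule.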
Essentially, design a pure proof-of-stake protocol that operates in the dynamic availability setting, so that the protocol has persistence and liveness as long as the adversary controls a minority of the stake, and furthermore make an argument that following the protocol as prescribed is aligned with the participants' incentives. This is the question we try to address with the Ouroboros research. Now I'll give a very, very quick overview of how Ouroboros proof of stake works. Ouroboros was presented at Crypto 2017 as the first provably secure POS blockchain. There were many proof-of-stake protocols proposed before Ouroboros. What was unique about this protocol is that we didn't just set out to design a POS blockchain protocol; we set out to develop a POS blockchain protocol together with a proof that the protocol met the objective of realizing a robust transaction ledger. The proof of security and the protocol itself were two goals pursued in tandem. It was not that the protocol was proposed and then we tried to find a proof; the proof arguments and the protocol itself were developed together, exactly with the intention of being able to present an argument that the protocol can be a convincing substitute for a POW blockchain protocol. A number of open questions were left from this research; this was an initial version of our POS blockchain research. The first thing left open was that the protocol employed a random beacon generator based on publicly verifiable secret sharing, which came with a substantial performance penalty. Also, the type of security achieved is something that in the cryptography and security literature is called semi-adaptive. This term refers to what the adversary is allowed to do when trying to subvert the protocol: is the adversary allowed to make decisions on the spot, or does he have to wait?
So this semi-adaptivity suggests that the adversary has to wait. Without going into further detail, it's something you would like to get rid of, even though it's quite standard in cryptographic and security protocol design, exactly because it's frequently an easier goal to achieve. We took care of this and achieved adaptive security in the next version of the protocol, called Ouroboros Praos, which appeared at Eurocrypt 2018 (another cryptography conference, which came after Crypto). There we also improved the performance of the beacon. And finally, the newest version of the protocol, which we released publicly as a technical report just over a month ago, called Ouroboros Genesis, has a feature that enables parties to bootstrap from the genesis block, thus addressing the issue of dynamic availability which I mentioned before. So let's understand the protocol very briefly. Maybe some of you have already seen presentations or descriptions of the protocol, or read the paper; I'll very briefly go over what the protocol does, just to refresh your memory. As I mentioned already, the protocol was designed together with the proof that demonstrates it is a robust transaction ledger. The proof strategy involves properties of the underlying blockchain data structure: common prefix, chain quality and chain growth, properties that we had established in prior work focusing on the Bitcoin blockchain. The honest parties are paired with an adversary who tries to subvert the protocol, and there are certain privileges that the adversary enjoys, such as network dominance: he can completely schedule messages, deliver messages any way he wants, act after all the honest parties act, and so forth. So when you approach a protocol like Ouroboros, it's helpful to think about it as designed in three stages. This is actually the approach we took in the first paper and have followed henceforth.
So in stage one, what we try to solve is a very small snapshot of the whole system execution. In that snapshot you can assume that the stake of the participants is fixed. What you ask is the following: let me fix the stake of all participants, and ask whether it is possible to generate an opportunity for the blockchain to advance by a little bit, so that the basic properties of the blockchain are satisfied. These are common prefix, chain quality and chain growth. In other words, there is going to be a large common prefix in the chains that all the participants hold, their chains will grow, and they will contain some honest blocks (that's the chain-quality aspect). Once we have this, the key step is to show that if this small single snapshot works and the blockchain can be extended by a little bit, then, if there were a random beacon that emitted a publicly verifiable and available random string, it would be possible to bootstrap this protocol and then do another segment. This is something like sequentially repeating the same process, with the randomness refreshing the participants' schedule. So this is stage two of the protocol, using an imaginary random beacon. Then, at stage three, you show how the protocol participants can take some additional steps and simulate that random beacon themselves. Thus we do not need anything external, and the protocol can proceed with just the randomness coming from the genesis block. Here is a small schematic representation. In the static-stake segment, time is divided into slots, or small time units. Some of these slots are empty, like the one you see here at position four. In the other cases, you have a slot where a certain stakeholder is elected and is, in a verifiable manner, eligible to issue a block.
Now, the randomness that elects that stakeholder comes from this seed randomness, which in the static setting you can just assume is built into the genesis block. So this stakeholder is elected and issues a block; this block contains a set of transactions and is signed, in a way that this particular stakeholder, let's call him L2, can use to convince anyone else that it is properly constructed. The block contains a link to the genesis block, in this way forming a hash chain. This continues, with some slots missed and some slots silent because no stakeholder is elected. It's also possible that some slots have multiple stakeholders issuing a block; the protocol should have a way of resolving these cases. So what you see is a first stage, which produces what I showed on the previous slide, and then there is a beacon which seeds, again, the random string that you have here. What happens now is that the stakeholder distribution, which originally was built into the genesis block, is drawn from the blockchain itself and reinitializes the protocol. The key trick here is that the whole process repeats. This structure is also fundamental in arguing that the protocol is secure: using this recursive structure, if you want, was one of the ideas that helped present a proof argument that the protocol works. The same process now continues towards the third stage: the beacon again produces randomness, a random string is added here, and the stakeholder distribution is drawn taking into account all transactions that have taken place in this period. These large periods are called epochs, and they are the basic building blocks, let's say, in the execution sequence of the protocol. Finally, using some cryptographic techniques, the beacon has to be implemented, and essentially the protocol participants themselves will produce that value as the protocol advances.
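The slot structure just described (empty slots, single leaders, occasionally multiple leaders) can be sketched as a local eligibility check. This is my own simplified illustration, not the actual Ouroboros election: each stakeholder hashes the epoch seed with the slot number and compares against a stake-proportional threshold. The active-slot coefficient f and the linear threshold are simplifying assumptions (the real protocol uses a more careful function of the stake fraction, and verifiable random functions instead of a plain hash).

```python
import hashlib

# Hypothetical sketch of per-slot leader eligibility: every stakeholder
# locally hashes (key, epoch seed, slot) and compares against a threshold
# proportional to its stake. A slot may have zero, one, or several leaders.

PRECISION = 2**256

def is_slot_leader(holder_key, slot, epoch_seed, stake_fraction, f=0.25):
    """f is the active-slot coefficient: tunes how often slots have leaders."""
    h = hashlib.sha256(holder_key + epoch_seed + slot.to_bytes(8, "big")).digest()
    value = int.from_bytes(h, "big")
    threshold = int(f * stake_fraction * PRECISION)
    return value < threshold

def slot_leaders(stakeholders, slot, epoch_seed):
    """stakeholders: dict key -> stake fraction. Returns all eligible leaders."""
    return [k for k, frac in stakeholders.items()
            if is_slot_leader(k, slot, epoch_seed, frac)]
```

Because eligibility is evaluated independently per stakeholder, empty and multi-leader slots arise naturally, which is why the chain-selection rule below is needed.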
So this is the high-level overview of how the protocol works. What I haven't told you is how the parties resolve disagreements between them: in other words, when they see two alternative hash chains, how do they pick one? The rule that Ouroboros Genesis follows has two types of comparisons. The first is a short-range comparison: if the chains that a protocol participant considers diverge just a little bit, up to k blocks where k is a parameter, they just follow the longest chain. For long-range comparisons, however, they do not follow a longest-chain approach, and instead use a "plenitude", or density, approach to pick the right chain. Let me explain a little what this plenitude approach in Ouroboros Genesis is. When a participant considers two chains that have diverged, and they diverge quite deeply, it considers their forking point and then focuses on a certain region just after the split. The party is going to follow the chain that has the greater density of blocks over the time domain. The intuition is that density of blocks in the time domain is evidence of higher participation. That is exactly the idea, and what we prove is that blockchains produced by the adversary will exhibit a less dense block distribution, and this is why the protocol can be secure despite the fact that the parties have no other advice, beyond the genesis block, when they join the protocol. What is particularly exciting about this is that this feature is not present in any of the other POS blockchain protocols, which either used a trusted checkpoint, a moving checkpoint, or some other information about the participation of the stakeholders engaged in the protocol. So that was the crash course on how Ouroboros proof of stake works.
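The two-mode chain-selection rule just described can be sketched as follows. This is my own toy rendering of the idea, not the actual maxvalid-bg rule from the Genesis paper; the parameters k (short-range depth) and s (the length of the density window after the fork) are illustrative, and chains are simplified to lists of (slot, block id) pairs.

```python
# Hypothetical sketch of Genesis-style chain selection: short forks are
# resolved by length, deep forks by block density just after the fork point.

def fork_point(chain_a, chain_b):
    """Index of the first block where the chains disagree."""
    i = 0
    while i < min(len(chain_a), len(chain_b)) and chain_a[i] == chain_b[i]:
        i += 1
    return i

def select_chain(local, candidate, k=3, s=10):
    f = fork_point(local, candidate)
    if len(local) - f <= k:          # short-range fork: longest chain wins
        return candidate if len(candidate) > len(local) else local
    # Long-range fork: compare block density in a window of s slots
    # starting right after the fork point.
    window_end = (local[f - 1][0] if f > 0 else 0) + s
    dens = lambda c: sum(1 for slot, _ in c[f:] if slot <= window_end)
    return candidate if dens(candidate) > dens(local) else local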
So now let me come to some of the important research streams that take the Ouroboros protocol and make it suitable as the backbone of a cryptocurrency like Cardano. The first stop is stake pools. An important challenge in POS protocols is that the stakeholders themselves have to be online and engage in the protocol execution. In some sense you were expecting that; that was the whole idea at the beginning: the stakeholders of the protocol are the ones that hold the coins, and thus they are the ones that will run the protocol. So in some sense you could say, well, that's not a problem; it was there all along, it is the whole idea. Nevertheless, this is not a good way to approach it. After all, just because you hold currency, or hold coins in a certain system, does not mean that you would like to participate in maintaining its ledger. Yes, you're interested in the currency itself, since you possess a certain amount of it; this doesn't mean that you have the ability to run a service that is going to participate in this protocol. If you look at Bitcoin, you can see that there is a clear decoupling between these two roles, and the decoupling I'm referring to is the fact that miners and coin holders are not the same set. Clearly miners hold some coin; after all, they participate in mining because they earn Bitcoin, so they are coin holders of a sort. But certainly it's not true the other way around: if you hold Bitcoin, it doesn't mean you mine, and you may even hold Bitcoin without participating in or even observing the protocol at all. You could have Bitcoin stored in a cold wallet or a paper wallet and just not engage in using the currency at all. So is it possible to address this? This is a real concern.
The problem is that if this is not addressed, you may run into a situation where only a small percentage of the stakeholders, let's say 10%, are actually interested in participating in the protocol. Then you have another 90% that, even though they're interested in holding the currency, are not effectively participating in the protocol in any meaningful way, and this creates a disparity that it would be good to address. An idea that is capable of addressing this is the concept of a stake pool. To understand what happens, let's look a little more deeply into the decomposition of a POS address, like the address that is implemented for Ouroboros, or actually an enhanced version of it which will be implemented soon. What you see here is an address that features a payment verification key and a staking verification key. What's important to observe is the duality of the keys found in an address; this is unique to a POS protocol. Your balance in a POS protocol has a dual functionality: it's the coins that you would like to spend, but it's also the stake you have for participating in the protocol. Cryptographically speaking, it is of course possible to use the same key for both of these operations, but this has some fundamental disadvantages. The most important one is that the key you use for staking, the key you use to participate in the protocol, needs in one way or another to be a "hot" key, so to speak. It can't be something that you leave in a cold wallet without it being connected to the internet. Perhaps for some people that's not an issue; some people may not even have a key at all, they may keep all their coins at an exchange address, so they don't have the issue of maintaining a key. But for others it is a real issue. So a way to address this is to have two independent keys: one key for payments and the other key for staking.
So this creates an address that has this double structure: there is a hash of the payment key and a hash of the staking key. Now, the staking part itself we don't call a key but rather a staking object, because what we want to achieve is three different features for that key. One, what we call a base address, is a standard address that has an independent payment key and an independent staking key. Another type of address we call a pointer address. A pointer address does not have an independent staking key; instead it points to, and thus inherits its staking key from, another address. And finally we have another type of address we call an enterprise address, which doesn't have a staking key at all. This enables someone to withdraw from staking altogether, because, for instance, they may not be eligible to do it for their own reasons; for example, they may not be allowed to profit from staking. So staking keys allow for participation in POS, but we can use them for more things, and this is what we do for stake pools. A staking key can create a pool creation certificate; such a certificate can be signed by multiple keys and will be attached to the blockchain. And delegating the stake associated with a staking key to a pool is another operation we can do with the staking key. So these are the three main operations we derive from staking keys: participating in the protocol, creating a pool, and delegating to a particular pool. Let's look a little at using these addresses, at the wallet footprints you expect to see in the blockchain under this mode of operation. On the upper right you see the case of an enterprise address, which I mentioned: enterprise addresses have no staking key at all, so they are not capable of participating in the staking process.
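The three address types can be sketched as simple records. This is my own illustration of the logic only; the field names are hypothetical and real Cardano addresses are binary-encoded, with pointer addresses referring to an on-chain location rather than an in-memory object as I do here for simplicity.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the three POS address types described above.

@dataclass
class BaseAddress:
    payment_key_hash: str
    staking_key_hash: str            # its own, independent staking key

@dataclass
class PointerAddress:
    payment_key_hash: str
    staking_pointer: "BaseAddress"   # inherits staking from another address

@dataclass
class EnterpriseAddress:
    payment_key_hash: str            # no staking component at all

def staking_key_of(addr) -> Optional[str]:
    if isinstance(addr, BaseAddress):
        return addr.staking_key_hash
    if isinstance(addr, PointerAddress):
        return addr.staking_pointer.staking_key_hash
    return None                      # enterprise addresses opt out of staking
```

The point of the sketch is the inheritance: many pointer addresses can share one staking key while keeping independent payment keys.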
In the lower right, what you have is a wallet that only creates base addresses. Base addresses have an independent staking key. There is a good advantage in having base addresses, which is mainly privacy: two addresses coming from the same wallet will be indistinguishable from addresses that come from another wallet, so in this way you can have a higher degree of privacy. Nevertheless, there is one particular disadvantage: staking from such a wallet will require more effort on the side of the user and their wallet. And finally, the normal mode of operation, if you like, is the one you see there: a base address and a number of pointer addresses. The pointer addresses just point to the staking key of the base address, and thus staking requires using only that single base address's staking key. So here is an example of creating a stake pool. For those of you familiar with digital signatures, you can think of a stake pool creation certificate as something like a top-level certification authority signing key, a self-signed certificate if you want. It's not exactly that, but you can think about it like that. A stake pool creation certificate essentially names the pool, determines its basic features, perhaps some information about the pool and some details about how the pool manages its stake pool members (which I'll cover in a moment), and it is signed by a number of staking keys. The staking keys that sign it may come from a base address that has a bunch of pointer addresses, or from base addresses that have no pointer addresses associated with them. Whenever you have a base address, you have some stake associated with it; for example, in this case you have one ADA here, one ADA, two ADA, and where you have pointer addresses you also have some ADA associated with them. The amount of stake that stands behind a stake pool creation certificate is the sum of all that.
In this particular case, it would be seven ADA. So here you have a stake pool creation certificate that is backed by a certain amount of ADA, seven, as much as these addresses provide. When you are joining a stake pool, on the other hand, you do the following. The stake pool creation certificate (on the upper right, just the red part) is on the blockchain; you see it and say, okay, I would like to delegate my stake to that stake pool. What you do is a process similar to before: you use your staking key to sign a delegation certificate that references that stake pool creation certificate. What you see here is, for example, a base address and three associated pointer addresses, backed in total by four ADA, creating a delegation certificate that refers to that stake pool. Then you have a delegation certificate issued by the base address on the lower right; all of these go to the stake pool at the top right. The stake pool was initially backed by seven ADA, but now it has thirteen ADA, because it has the two pool members you see there. What happens now is effectively the following. You have a set of entities that created a pool, let's say backing it with 20 ADA. They collected some delegates that assigned their stake to the pool, raising the stake of the pool to 90 ADA, and then that stake pool, in particular the people associated with the pool, will be responsible for running a full node in the Ouroboros blockchain protocol. They will participate, but they will participate effectively as if holding 90 ADA. That works in the following manner: whenever one of those stakeholders is elected to participate in the protocol, the pool entity is going to act on their behalf by directly referring to the sequence of certificates that exist in the blockchain.
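The stake accounting just walked through can be sketched as a small aggregation. This is my own illustration with hypothetical field names, not the actual ledger rules: the stake behind a pool is the sum of the balances behind the staking keys that signed the creation certificate, plus the balances behind every delegation certificate referencing the pool.

```python
# Hypothetical sketch: computing the total stake backing a pool from its
# creation certificate and the delegation certificates that reference it.

def pool_stake(creation_cert, delegation_certs, balances):
    """balances: dict staking_key -> total ADA controlled by that key
    (its base address plus all its pointer addresses)."""
    stake = sum(balances[k] for k in creation_cert["signing_keys"])
    for cert in delegation_certs:
        if cert["pool_id"] == creation_cert["pool_id"]:
            stake += balances[cert["staking_key"]]
    return stake
```

With the numbers from the slide (seven ADA behind the creation certificate, delegates contributing four and two), this yields the thirteen ADA mentioned above.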
Again, pointing back to the blockchain, to the particular location where you have the stake pool creation certificate and the delegation certificate to that pool. So what are our main challenges? That was the delegation mechanism, how you create stake pools. There are two very basic challenges here. First of all, a main challenge is that stakeholders may aggregate into a single pool or a few pools. There's an obvious disadvantage here, which is that the system becomes centralized. So it is therefore quite important to think about how it is possible to prevent that. Another problem is Sybil attacks. This class of attacks is well known in cybersecurity, and they refer to a situation where you have a single actor creating multiple identities. They appear in the eyes of the system and other participants as multiple different identities, but in reality it is just a single actor. In this particular instance of a Sybil attack, you would have a single actor creating multiple pools, perhaps with different websites and so on, but all controlled by the same entity. In both cases the problem is that the system becomes centralized, and perhaps the second one is even worse, because at least in the first one the system becomes centralized and it also looks centralized. In the second one it is centralized but it doesn't even look centralized; it may look deceivingly distributed. Which brings me to incentives. So how is it possible to address these issues? These issues have to do with the way that participants engage with the protocol, so they are not purely cryptographic in nature; they are also game theoretic. So let's review the basic stake pool tasks. What is a stake pool supposed to do? It has to be online to carry out the basic protocol operations. It has to check if a stake pool member is elected in a slot, and issue a block on their behalf.
They also have to collect and relay transactions to other nodes. So there are certain tasks that have to be carried out by a stake pool, and essentially they require running a full node on a server that is guaranteed to have good uptime and connectivity with the network. Furthermore, it should collect transactions, relay them and so forth, and check the progress of the protocol. So what we want is to design a reward scheme that will incentivize the parties to follow the protocol. In this particular case the parties that we are interested in are all the stakeholders, some of them becoming stake pool leaders and some of them becoming their delegates. Designing a mechanism here refers to a situation where, as the protocol advances, there are certain events happening in the ledger itself that trigger a certain reward to be given to one or more of the participants. For instance, looking at the Bitcoin blockchain, there is a reward scheme in place that rewards the miner that issues a new block with some new bitcoin and the transaction fees collected from the current block. Now, understanding why this mechanism is there is a very fundamental question. This mechanism is there to create the right set of incentives for the participants to run the protocol. There is a long-running debate about whether the mechanism that is in Bitcoin is a good mechanism. From a theoretical point of view there are extreme deficiencies. One important one, highlighted by the class of attacks called selfish mining attacks, suggests that this mechanism cannot be an equilibrium, because when all the other participants are following the protocol, it makes sense for you, for your mining pool if you want, to do something slightly different, for example withhold a block, and that will create an opportunity for you, at least in the short term, to gain more than other pools. Actually this can also be generalized and seen in the longer term.
Now, while this is not happening right now, or at least is not happening in a way that we can detect, the question remains: we do need a better understanding of the incentive mechanisms behind these protocols, and we cannot adopt with any reasonable certainty the reward mechanism that is in Bitcoin. So the desired feature of a reward scheme is that participants' payoffs from the mechanism should be such that they do not want to deviate from the protocol. Assuming of course they are rational, and we have to accept a certain model of rationality here in order to be able even to articulate such an argument. Furthermore, and this is an additional and very important consideration, the way the reward scheme works should promote certain configurations and should exclude others. So for example, the configuration where all the pools just collapse into a single one is better avoided. So how do we design a mechanism that is capable of doing that? We have to give rewards. Any reward scheme should have a starting point: there have to be rewards that will be distributed to the participants. And this is also, I should say, the approach that we've taken. There is also a kind of negative approach that you can take here, because, well, what are rewards after all? They are negative penalties. So an alternative approach is to just penalize, and this is an approach that has also been explored in the general space of designing these mechanisms; for instance the slashing conditions in the Casper protocol are a type of penalty. But here we chose rewards. So where are we going to take these rewards from? Transaction fees are a standard source, and funds drawn from some reserve. As you know, there is an ADA reserve which has, as one of its roles, a part to play in this incentivization process. So some funds drawn from the ADA reserve will be used to reward the participants that are running the protocol.
So the reward scheme then will have the job to split the reward pool in a stable manner across the stakeholders. The reward scheme we are going to employ in Ouroboros, consistent with the observations in the original Crypto paper, is epoch based. So instead of taking the approach of issuing rewards with every block, what happens is the following. As you remember from the high-level description, the protocol advances in epochs: a number of blocks are created, randomness is generated, you reseed the random string of the epoch, and you advance. At that moment you stand back, you see what happened, and you say, okay, for that period there are certain rewards that have to be distributed. And the reward scheme will take those rewards and distribute them in a proper way. Given that in the Cardano implementation a slot lasts 20 seconds and an epoch contains 21,600 slots, an epoch is five days, so every five days there are going to be rewards. Some of these rewards will come from transaction fees. Here's a reminder of how transaction fees are determined in Cardano. The minimal fee follows a linear formula, a + b · size, where a and b are constants and size is the transaction size in bytes; essentially you have to pay more as the size of the transaction grows. So here's a worked-out example. Now, what's important here is that both of these components were selected for the following reasons. On the one side you have the a component, which basically says that every transaction, no matter how small, has a minimum amount to be paid; if you want, this is basically a denial-of-service type of protection. On the other hand, the bigger the transaction is, the more you have to pay, and that's the linear component b · size. At the same time, the other source from which we are going to draw rewards for the reward scheme is the ADA reserves.
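The linear fee rule just described can be written down directly. The default constants below are the ones Cardano published at the time; treat them as illustrative, since they are protocol parameters that can change:

```python
def min_fee(size_bytes, a=0.155381, b=0.000043946):
    """Minimal transaction fee in ADA: a flat component 'a'
    (anti-spam / denial-of-service protection) plus a per-byte
    component 'b' times the transaction size."""
    return a + b * size_bytes

# A 200-byte transaction pays the flat fee plus 200 times the
# per-byte rate; a bigger transaction always pays strictly more.
fee_small = min_fee(200)
fee_large = min_fee(400)
```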
As you know, the supply of ADA circulating today is 31 billion, and the maximum supply is 45 billion, so there is a 14 billion reserve. The reward scheme is a type of function: when you look at the end of the epoch you will see a distribution of all the stake assigned to pools. So imagine you're looking at the end of an epoch, and what you see is some stake pool creation certificates and some delegation certificates that assign certain stake to each pool. A pool that, let's say, holds 12 percent of the total stake can make a claim for a certain amount of rewards that is determined by the reward scheme and is a function, among other things which I will explain in a moment, of that 12 percent. It will not be 12 percent of the rewards, and that's important; I will explain why. One first circumstance in which it may not be is that the claim for the reward will only be honored assuming the pool is operating well. Operating well means that it participates in the protocol. Of course we cannot be certain about what the pool does not do that is not reflected in the blockchain, but what we can be sure about is whether the pool has participated at a certain moment. So in Ouroboros from the Crypto paper, as it is implemented in Cardano right now, it is possible to know that a certain pool missed a slot in which it was elected and was supposed to participate. This is one reason that you may not honor a specific claim to rewards. Now, an important feature of the mechanism itself is the fencing that the mechanism does. What you see here is that the reward is a function of that 12 percent, but a pool losing its rewards will not cause other pools to get more. So essentially, if for whatever reason the mechanism deems that a certain pool is not allowed to get its 12 percent, or that this 12 percent should be tapered and reduced according to a certain formula, this does not mean that the other participants will be able to get more.
And this type of fencing mechanism is key to arguing that selfish mining will not be able to help you improve your position if you are a stake pool. In other words, if you engage in block withholding, for instance trying to make sure that certain transactions don't make it into the ledger, potentially giving you an advantage, this will not change your position with respect to rewards. So let me now come to one of the most critical features of the reward mechanism. A critical feature of the reward mechanism is that we would want to limit the number of pools created to a certain number. This is important for ensuring that the pools are, first of all, big enough so that there is sufficient decentralization, but also not so big as to defeat their purpose. What we want is to set a target number, and that target number is going to be the natural number that the system converges to. So what we investigate in the reward scheme design space is: is it possible to set up a reward mechanism, parametrized by a certain parameter k, the desired target number of pools, such that the free rational behavior of the participants converges to that number of pools? Because that number of pools, let's say k is a hundred, is going to give a natural decentralization in the system while remaining small enough for the system to operate fast. A simple way you can think of achieving something like that is to taper rewards. What we do is that the amount of rewards that the pool gets will stop increasing once the size of the pool gets larger than a certain point. We don't want the pool to get more rewards if it gets bigger. So we would like the pool to gain rewards while it's small, and if it reaches a certain size we would like it to stop gaining rewards.
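The tapering idea just described can be sketched very simply (the function name is mine; the real scheme is more involved): rewards are computed on the pool's relative stake truncated at a saturation point of 1/k, so growing past that point earns the pool nothing extra.

```python
def reward_weight(pool_fraction, k=100):
    """Fraction of total stake the pool is credited with for rewards:
    its real fraction, capped at the saturation point 1/k
    (1% of total stake when k = 100)."""
    return min(pool_fraction, 1.0 / k)
```

With k = 100, a pool holding 0.3% of the stake is credited with its full 0.3%, while a pool holding 1.2% is credited with only 1%, so further delegation to it gains nothing.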
Now this mechanism will ensure that if a pool gets too popular for whatever reason, say the stake pool leader is very good at advertising the pool and so forth, this will not lead to a popularity contest that can make certain pools very large. A simple way to achieve that is to say that as the pool gets larger the rewards will start to diminish, and there will be a certain point after which the rewards stop increasing. So if you are a prospective stakeholder that would like to delegate to such a pool, it would not make sense to delegate to it, because you will get less. Instead you will try to find a smaller pool and participate in that. Here is a simple example where you see stake pools A and B with stake 0.3% and 1.2% respectively. If you apply a cut at 1%, then a pool that has 1.2% will just receive rewards as if it had 1%. So it would not be rational to delegate to that pool once you see that it has exceeded 1%; instead it makes sense to make a new pool. This policy will prevent stake pools from growing too large. Another feature of the reward scheme I already mentioned is penalizing downtime. If slots are missed, which is something that can be detected in the Ouroboros blockchain, there will be penalties in the form of not receiving rewards, and such penalties may last for certain periods. In this way, stakeholders that have delegated to pools that are not operating well will be given the opportunity to move their stake to other pools. These penalties will not affect the rewards of other pools. That's an important feature which I mentioned already, which basically means that you will not be able to create a situation where a certain penalty is incurred by another pool just because a certain pool has missed its own rewards. So let's turn now to the rewards distribution function. The reward scheme is meant to operate automatically.
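One plausible shape for such an automatic distribution, sketched here entirely under my own assumptions rather than as Cardano's actual formula, is: the pool leader first recovers a declared operating cost, then takes a declared profit margin of the remainder, and everyone, leader included, shares the rest in proportion to committed stake.

```python
def split_rewards(pool_reward, cost, margin, leader_stake, member_stakes):
    """Hypothetical automatic split of a pool's epoch reward:
    leader recovers its declared cost, takes its declared margin
    of the remainder, and the rest is shared pro rata by stake."""
    after_cost = max(pool_reward - cost, 0.0)
    leader_cut = min(pool_reward, cost) + margin * after_cost
    shared = after_cost * (1.0 - margin)
    total = leader_stake + sum(member_stakes.values())
    payouts = {m: shared * s / total for m, s in member_stakes.items()}
    payouts["leader"] = leader_cut + shared * leader_stake / total
    return payouts

# 100 ADA reward, 10 ADA declared cost, 10% margin; the leader
# committed 20 ADA and two members committed 30 and 50 ADA.
payouts = split_rewards(100.0, 10.0, 0.1, 20.0, {"a": 30.0, "b": 50.0})
```

Note that every ADA of the pool's reward is accounted for: the payouts always sum back to the pool's total reward.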
The mechanism is not going to operate in a way where stake pool leaders reward the stake pool members themselves, as happens for example in Bitcoin mining pools. Instead, the system itself is going to redistribute rewards according to a predetermined mechanism. So stake pool members will receive rewards automatically from the system, in the form of credit to their accounts or active UTXOs. A pool leader will declare a cost and a profit margin, pool members delegate their stake to the pool, and then the distribution function will split the pool's rewards taking into account cost, margin and stake, ensuring that the pool leader will cover its costs and make some profit, while guaranteeing at the same time that pool members will receive rewards according to the stake they have committed. All this calculation will be public and implemented as part of the blockchain itself. So how do we design this mechanism? Here is just an example of some of the experiments that we're running to understand the rational behavior of actors in such a system. Without going into more detail, what we're examining right now is different functions that should exhibit the following behavior: if the rational actors are left to decide for themselves what to do, the whole system should converge to a predetermined number of pools. All these experiments start with a candidate function and a distribution of stake, which is synthetically created here following a so-called Pareto distribution, and then what we have in the experiment is a simulation of rational actors engaging in the protocol, delegating and creating pools freely, trying to optimize their utility. And what we want to achieve is a behavior like this: here you see pools that are created and then maintain their stake over time, each of them at roughly the same level. So here's an example of a rather stable run, and here's an example of a bad run. So this is a bad function.
The previous one was a good function. What makes this function bad, as you see here, is that multiple pools are created. Here's the number of pools; it actually hits the maximum, so basically everyone has a stake pool. Then the system does what it is supposed to do: pools are dying, and larger stake pools are formed. And just as the system shows that it's going well and tries to stabilize, it gets destabilized, which means that there is a certain action of one of the participants that triggers a race again towards creating more stake pools. So here is an unstable distribution. What we've done in this game-theoretic analysis, which will be available soon in its entirety, is to search through possible functions and develop a stake pool reward scheme that has all the right properties that we need. Most importantly, that it gives a stable distribution of stake pools where every stake pool has about the same amount of stake; none of them is too big or too small. So I don't have the time to tell you more about that, and I'll try to wrap up very soon. I want to say a few things about sidechains. Sidechains are another very important research stream for Ouroboros. A sidechain basically is a mechanism that builds communication channels between blockchains. Here is, for example, the Bitcoin blockchain, and if Ethereum were acting as a sidechain of Bitcoin, there would be a certain event happening in Bitcoin that creates the possibility to react to that event in the other blockchain. So these are extremely useful mechanisms. Sometimes they're called pegging mechanisms between blockchains; they enable a connection to be made and essentially assets to be transferred from one blockchain to another.
What you want to achieve is what you might call sidechain participation independence, which essentially says that the stakeholders that participate in one chain and the stakeholders that participate in the sidechain need not be the same set. If they are exactly the same set, you understand that it's not really a problem; it's a trivial problem. Essentially you have two blockchains which are maintained by the same set of parties, and it is trivial to facilitate transfers of assets between the two, or at least it's just an engineering question. But it becomes a very difficult question when the sets of stakeholders are not the same. A particular instance of what you might call first-generation sidechains is the star-structured one, where basically you have a main chain and then you have sidechains which are associated with the main chain. The assumption of the star structure is that all stakeholders follow the main chain, but arbitrary subsets of them follow each sidechain. So a star-structured sidechain system, if you want to call it like that, is an easier instance of the problem to solve, and this is what we are addressing right now as a first-generation sidechain system for Ouroboros and Cardano. To cast it in the Cardano perspective, the main chain will be the settlement layer, and then you have multiple sidechains which provide enhanced operation. For example, the computational layer is one of them, but you can think of many different ones. Very importantly, you may not have a single computational layer, and we won't have a single computational layer. There will be multiple ones that can coexist, and then you can choose to transfer your assets to different sidechains that will be running computational layers.
The first one, the testnet for the K-framework-based Ethereum Virtual Machine, was launched just recently, almost two weeks ago. Then we have IELE, which is a register-based virtual machine, as opposed to a stack-based one as in the case of the Ethereum Virtual Machine, and Plutus, which is based on a functional programming language. Sidechains in Ouroboros rely on a cryptographic primitive, a threshold multisignature, one of the cryptographic primitives that we developed specifically for supporting sidechains. The key operation of this primitive is that it allows the stakeholders of a sidechain to succinctly signal the status of the sidechain to the main chain maintainers. The key point here is succinctly: what we want, basically, is for the footprint of a sidechain on the main chain to be as small as possible. By having this primitive we can allow incoming and outgoing transactions to be facilitated across sidechains. So this brings me to the end of the talk. Thank you very much for your attention. I'll be happy to take questions. Two questions. One of them is: can you give us some intuition behind why the majority chain should always be the denser chain? And the second one is: in the end, is there any way to address the Sybil attacks? Yeah. Great questions, both of them. So, some intuition about the density rule. Participation in Ouroboros is based on election according to stake. So imagine that you have an urn or a box with red and blue balls. If, let's say, the blue balls are just bigger in number, say 51 versus 49, what you hope and expect is that drawing from this box over a long period of time will show a bigger number of blue among the balls you've drawn, right? So this is a crude example, but it's a starting point to think about this.
So somehow we hope that this argument will extend to the time domain: as we experience the blockchain advancing, the participants that are elected to issue blocks in the main chain, or if you want the chain that most parties follow, will be more frequent. Now, this is not going to be true for all segments of that chain. For instance, there is a possible strategy that is still feasible. It's not going to be an attack per se against Ouroboros Genesis, but a possible strategy here is the one that says: I can mine some initial blocks on my own, and this part might be sparse, and then a certain good event happens and, let's say, I'm elected all the time, so it's going to be very, very dense. So I'm going to be elected all the time and all blocks will be mine, but there will still be this initial segment where the chain is going to be sparse. So that's the main intuition here: the adversary, which is, let's say, the 49% adversary, is operating at some disadvantage. He is still not elected sufficiently often over any particular period of time. So there will be a certain point where his chain is going to be more sparse, and we can use that to distinguish. That's the intuition of what's happening in the case of two blockchains, one produced by the adversary, let's say, and another by the honest parties. Okay, so the next part of the question was: what do we do about Sybil attacks? Indeed, what I showed you here is nothing; I posed the problem of Sybil attacks, but I didn't tell you how we address it. It is actually very important that this is addressed. The mechanism that I showed you ensures that a stake pool will not become too large, but in the slides nothing prevented someone from doing the following: create a pool, and then another pool, and then another pool, and then another pool.
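The urn intuition can be checked numerically. Below is a small calculation of my own, not taken from the papers, which simplifies matters by treating each slot as an independent election with probability proportional to stake, and computes the chance that a 49% adversary ends up with at least as many blocks as the 51% honest majority over a window of n slots. The chance is below one half and shrinks as the window grows, which is exactly why longer observation windows let you tell the chains apart:

```python
from math import comb

def adversary_at_least_as_dense(n, p_honest=0.51, p_adv=0.49):
    """P[adversarial chain has at least as many blocks as the honest
    chain over n slots], with independent per-slot elections."""
    def pmf(p):
        # Binomial distribution of block counts over n slots.
        return [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    h, a = pmf(p_honest), pmf(p_adv)
    # Suffix sums: tail[i] = P[adversary's block count >= i].
    tail = [0.0] * (n + 2)
    for i in range(n, -1, -1):
        tail[i] = tail[i + 1] + a[i]
    return sum(h[i] * tail[i] for i in range(n + 1))

p_short = adversary_at_least_as_dense(100)
p_long = adversary_at_least_as_dense(400)
```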
Perhaps all of them can become large, and you may end up in a situation where you have, let's say, this target number of pools all produced by a single entity. So what we're going to do to address this is to provide a small, but still non-negligible, advantage to stakeholders that start a pool with substantial stake. So for example, if I am a stake pool leader that starts with one ADA, and you are, let's say, a group of stakeholders that start a pool with 10 ADA, this is going to have a positive payoff for your pool. So even though, with 10 ADA, I would be able to create 10 pools with one ADA each, I would end up with less money than if I ran a single pool with the 10 ADA. Is that clear as a concept? This is something that our mechanism takes into account. That's why, going back to this slide, the fact that this stake pool creation certificate was backed up by 7 ADA is important. It's not just the 13 ADA that is important, but also the 7 ADA that the pool started from, because, I repeat, this is something that the mechanism is going to be sensitive to. So take these, let's say, people here that control the pool (well, actually the stake pool was created in the previous slide; these are just the delegates). Here's the pool that has 7 ADA, and you have one participant here that has 4 ADA and, let's say, two others that have 1 and 2. If they instead created 7 stake pools with one ADA each, these 7 stake pools would effectively give them less than this one. So this is the way that we're going to incentivize people not to engage in Sybil attacks. I have to say that this is only an incentive-driven mechanism; in other words, it is only going to be based on the rationality of the participants. There is not going to be a fundamental prevention of Sybil attacks. After all, if the system is truly decentralized, Sybils might exist.
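One way to get this behavior, sketched here with an illustrative formula and parameters of my own choosing (the production scheme may differ), is to make a pool's reward slightly superlinear in the founders' own pledge, so that splitting one pledge across many pools pays strictly less than keeping it in one pool:

```python
def pool_payout(pledge, delegated, total, R, k=100, a0=0.3):
    """Illustrative pledge-sensitive reward: the pool's capped
    relative stake earns a base reward, plus a bonus term that
    grows with the founders' own pledge (a0 tunes the bonus)."""
    z0 = 1.0 / k                                   # saturation point
    sigma = min((pledge + delegated) / total, z0)  # capped pool stake
    s = min(pledge / total, z0)                    # capped pledge
    bonus = s * a0 * (sigma - s * (z0 - sigma) / z0) / z0
    return (R / (1 + a0)) * (sigma + bonus)

# The talk's scenario: one pool pledging 7 ADA earns more than
# seven Sybil pools pledging 1 ADA each, all else being equal.
one_big = pool_payout(7, 0, total=1000, R=100)
seven_small = 7 * pool_payout(1, 0, total=1000, R=100)
```

Because the bonus term is roughly quadratic in the pledge when pools are small, consolidating the pledge always beats splitting it, which is what disincentivizes the Sybil strategy.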
What we can do, and what we will do, is to disincentivize Sybils, in the same spirit that you would find in other cases where a system tries to address Sybil attacks, for example in the case of DoS protection. You create some mechanism to make it expensive for the adversary to engage in Sybil behavior. And this is exactly what we will do here as well. Yes, more questions. Perhaps up there. In the context of exchanges, to me it sounds like they will somehow be put in an advantageous position, because they would have a substantial amount of ADA stored. Yes. And also they might even be able to break the picture you drew with respect to the length of their fork of the chain. So even though the rewards are uniformly split across all pools, one over k as you mentioned, aren't they going to be able to rewrite history, essentially, and this way channel more funds into their pockets? Okay, so the question is about exchanges and how we deal with them in the reward mechanism. I should tell you that this is something that we lost a lot of sleep over; it is a major concern in general that we have. Exchanges happen to be special participants in the system. They are participants who are themselves representatives of other participants. So in some sense they are like stake pools, but they're not exactly stake pools, because they may not necessarily engage in running the protocol at all; in principle, they may just not operate in this manner in any way. So one approach that we will offer to exchanges is to not participate in staking at all. Essentially, they just give up the right to stake. That's why you will see in the mechanism that we do have these enterprise addresses. It is going to be feasible for an exchange to say: I'm an exchange, so all the funds I have are essentially not mine; I'm not an actual stakeholder, but I represent some stakeholders.
So I can opt for an enterprise address, and that address will not participate in staking. I should say this is not something that is feasible to enforce, but it's an option. So approach A for an exchange is using an enterprise address. Whether this is going to be the preferred mode is also, I think, to a large degree a community decision, I mean, what would be the best way of handling this. But that is definitely an option that we have considered, and it's going to be feasible in the system. At this moment these enterprise addresses do not have any additional capability, at least in the first generation of the system, but what we intend is to facilitate some advantages for enterprise addresses in the long run. Enterprise addresses may have certain advantages over other addresses in terms of how they move funds, let's say, from one address to another, and that would potentially make them more attractive to exchanges. But this is not something that is going to be released at the onset. Otherwise, as the situation stands, exchanges will be like everyone else that has a lot of stake. So an exchange is essentially an actor that can set up a stake pool if they want, and they can even use that stake pool to advertise and engage with potential customers. And clearly, because of the mechanism that is going to cut the rewards at a certain level, let's say 1%, if you have an exchange that has 5%, it can set up five stake pools. And that's fine; there's nothing bad about this in the sense that, well, you have a 5% stakeholder, so it's natural, if we cut at 1%, that there are going to be five different stake pools controlled by the same 5% stakeholder, and that's what might happen. So I envision these two possible scenarios for exchanges: either following the enterprise address route and not participating at all, or, having a 5% stake, splitting it into five stake pools.
But other than that, they will not have any other way of gaming the system; at least there would be nothing in the system to give them a better advantage at gaming it. Yes, please. What about the weights? Because the random selection is actually based on the distribution of stake. That was going to be my next question: why isn't the random function drawing uniform samples, rather than samples weighted by the size of the stake? Because that's what I meant before about potentially breaking the picture you drew: the pools will have more ADA, and therefore the probability of being drawn as block proposers will be larger. Yeah, but that's what we're taking care of. So basically the whole analysis we do operates as long as no single stakeholder is over 49%. Oh, I mean, over 50%. In other words, yes, that's an issue, but it's not a real issue as long as no exchange reaches that height. So just to take the point a bit further, because the question is why an exchange doesn't pose a security issue in our whole analysis: we are choosing our security parameters so carefully, and so pessimistically if you want, exactly because we want to address those issues. There is a reason the epoch is five days, for instance. This comes from certain calculations that say certain bad events should not happen. We would have preferred that it's not five days; we would have preferred, let's say, one day or half a day, so the system could evolve faster, or somehow catch up faster with the stakeholder distribution that it uses as a reference point. But it is five days exactly in order to address issues like that. We would like big stakeholders, not just 5%, but even up to 49%, to not be able to create a security problem. Thank you. Another question? Yes, please. Yeah, or just, well, I don't know. I mean, yeah, I guess the gentleman on top. Yes, thank you. Thank you very much for the presentation.
It was great. I'm interested in the reward part of it. You said you have 45 billion ADA available and 31 billion are allocated, right? Then you have a fixed reward per slot. A fixed reward per... it's not fixed, but it's basically per epoch. First of all, it's not per slot, so it's a bigger period of time, but probably this doesn't change your question. No, but how long does that reward, that treasury, last? Oh, how long will it last? Okay, so the final details of the reward scheme we're going to roll out are not yet completely fixed, but what I anticipate right now is something like this. The reserves are going to fund the reward pool for every epoch. The reward pool for every epoch, as I mentioned, is going to come from the transactions in terms of fees, and from the reserve; there is going to be something moved from the reserve every epoch. And the anticipation is that over time, following a little bit the way Bitcoin works, the part coming from the reserves is going to be diminishing, relying more and more on transactions. The exact details of this, the right function for that, we are still investigating, and there are certainly interesting issues there about the exact way of doing it. But that's what I anticipate is going to happen. So, in other words, I'm not sure there's a clear answer to how long it will last. It can last for a long time, but its contribution to protocol execution will be diminishing over time. Is that the reason? That's the answer to my question. And my second question is: you said that the pass-through between the stake pool and the actual stakeholder is fixed by the protocol. Wouldn't it be better if it was variable, so that stake pools can compete with each other? Yes, so there is a certain amount of room, let's say, for competition: every stake pool declares its operational cost and declares a profit margin.
So in other words, there is already a certain amount of competition. Nevertheless, at least for this first generation of how the system is going to behave in an incentive-driven, decentralized form, it is going to be rather restricted. While innovation is something we would really be interested to see, at the same time there is an understanding that we would like to have a full game-theoretic analysis of how the system is going to evolve, and we would like to ensure that the system rolls out with the right amount of guidance so that it reaches a properly decentralized setting. So whereas we very much entertain the idea of relying on potentially innovative ideas, let's say stake pool distribution functions that the community can come up with, at the same time we would like to make sure that the system stabilizes to something which is decentralized. And in some sense, reaching that decentralized point is the most important task right now. After that, there will be mechanisms by which the system can evolve into something more free, and the community can actually drive the system there. So there will be a point, after all, at which we will not be the only ones developing the system. Once we have side chains in place, and the system for creating side chains, then you can have side chains that follow very different reward structures. So the way I view this is that at least this first generation, side chains plus decentralization, is going to give a decentralization profile that is guaranteed as much as possible within a certain model of rationality. And after that, the system may evolve further because of innovations from the community. At least this is how I view this. Yes. Have you calculated how much inflation there would be during, let's say, a first stage? Inflation because of the ADA reserves? Inflation because of the ADA reserves. No, because this is just one of the many considerations we have.
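The declared operational cost and profit margin mentioned above give the room for competition between pools. A hypothetical sketch of how such a split could work (the exact protocol parameters were not final at the time of the talk):

```python
def split_pool_rewards(pool_reward, declared_cost, margin):
    """Hypothetical split: the operator first recovers the declared
    operational cost, then takes the declared margin of the surplus;
    the remainder goes to the pool's members (delegators)."""
    operator = min(pool_reward, declared_cost)  # cost recovery first
    surplus = pool_reward - operator
    operator += surplus * margin
    members = surplus * (1 - margin)
    return operator, members

# operator recovers cost 100 plus 5% of the 900 surplus; members get the rest
op, mem = split_pool_rewards(pool_reward=1000.0, declared_cost=100.0, margin=0.05)
```

Under a scheme like this, pools compete by advertising lower costs and margins, which is the restricted but analyzable form of competition the answer describes.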
But certainly the minimum, let's say, here is that stake pool leaders cover their costs and get something. This is not a scheme for necessarily making stake pool leaders and their members rich, though. It's more about making sure the system becomes decentralized through the community that supports it. So I would rather aim for less than more, with the intention of having more for later, ensuring that the system can go on in the long run, and also potentially using the ADA reserves for other things. For example, the treasury system, which is another important research stream which I haven't covered. This is another possibility for directing a certain amount of the ADA reserves. In short, the treasury system is going to enable stakeholders of the system to propose ways that the system can evolve. For example, a new side chain that does XYZ. Then the community can vote on whether they would like that particular side chain to be created. And there is a possibility that funding for such new side chains could be drawn also from the reserves. So that would be another way of helping the system to evolve. So I would rather hold the reserves for that as well, rather than spending too much of them at this initial stage. Yeah, the gentleman there, yes. Hi. My question is slightly broader than today's presentation. You have referenced, obviously, other projects like Bitcoin and Ethereum's Casper. So obviously, in your development, you've not just looked inward; you've looked at what else is happening outside. I'm curious, given the history of Dan and Charles, and the recent mainnet stuff with EOS, what is the thinking internally, IOHK slash Cardano, with respect to other projects? Because I feel like with the Gauss project, which was released recently, there's an intention to try and prove that Cardano is not vaporware, which I don't know.
It feels a bit like there's a sense of insecurity, because of the slower-but-sure approach that you've taken. So I'm just curious what the internal thinking behind that is, if that makes sense as a question. Okay, I'll try to say some words, hopefully in the direction of your question. So first of all, yes, of course, we are reading and studying everything that other projects do, to the degree that's feasible. There's a lot of interesting and amazing stuff happening in the wider blockchain community. We're also trying to share our own things. And I should say, and I pride myself and the IOHK research team on this, that we try to share our work in a very clear, succinct manner. We present our results in the context of the wider body of knowledge in distributed systems and cryptography, we try to draw parallels, and always, to the best of our knowledge, give credit when credit is due. From my point of view, I've been working in cryptography since the 90s, and I've seen the area grow and change over time and become more popular. And of course, I'm delighted to be able, for example, to sit here in front of you today and share some of the basic research ideas that, in one way or another, were in my head for many years, and have now become important components in large-scale projects that more and more people care about. So I think it's a great opportunity, and one that we should celebrate as a community. And when I say that, I don't mean the Cardano community, or a community that follows specifically what IOHK does, but the wider blockchain and distributed ledgers community. What I hope about other projects is that they also follow elements of our approach, in the sense that they share their results in an organized manner and in a way that we can compare.
And we can make this space better, more resilient to criticism, and, if you want, in some sense provide a really robust infrastructure that can be taken seriously by all the rest of the world that we would like to change. I mean, in some sense it's naive to say that we will be able to change so much by doing so little. Bitcoin is an amazing idea, but it's not enough. It is not enough. Cryptography in the 70s had an amazing idea: key exchange. It's a beautiful idea. It's a powerful idea. It's something that really came out of nowhere. Using a key exchange protocol, the following completely counterintuitive thing became possible. Right now, in this room, me and you, just the two of us, can develop a language without having ever met before. We will start speaking in English to each other, and after a few sentences go back and forth, nobody in the room will be able to understand what we say. But we will. The two of us, me and you, will have our own language. And we can set up this language right now, right here, without ever having met before. That's an amazing idea. Just think about it. It sounds, first of all, impossible. When Ralph Merkle, in the early 70s, went to his professor at Berkeley and told him he wanted to work on this idea for his semester project, his professor told him: that's not a good idea. It sounds completely out there, and you should not pursue it. And he pursued it. He pursued this idea. And he, together with other pioneers of the time, Whit Diffie and Martin Hellman, actually managed to produce a protocol that does this counterintuitive thing. So that was an amazing idea. But did it change the world at that time? No. Not at all. Years had to pass.
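The "private language set up in public" described above is exactly a Diffie-Hellman key exchange. Here is a toy version with textbook-sized parameters (p = 23, g = 5, chosen only so the arithmetic is readable; real deployments use vetted large groups or elliptic curves, plus authentication of the exchanged values):

```python
import secrets

# Toy Diffie-Hellman key exchange over a tiny group -- illustration only.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1      # Alice's private exponent
b = secrets.randbelow(p - 2) + 1      # Bob's private exponent

A = pow(g, a, p)                      # Alice announces A in public
B = pow(g, b, p)                      # Bob announces B in public

key_alice = pow(B, a, p)              # Alice computes (g^b)^a mod p
key_bob   = pow(A, b, p)              # Bob computes (g^a)^b mod p
# Both arrive at the same secret; eavesdroppers only ever see A and B.
```

Everything exchanged is public, yet the shared key stays private: this is the counterintuitive property that took decades of further work (authentication, certificates, formal models) to turn into deployed protocols like TLS.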
Very substantial research had to be done before we would actually reach today, where you can just open your browser, connect to Facebook, and not care that your newsfeed and your postings go through tens of computers along the way. And you can be sure that these intermediate computers will not be able to listen to what you type. So what I'm saying is: remarkable ideas are important. They are the ones that create a paradigm shift, in some sense. There was the world before key exchange, before public key cryptography, and the world after public key cryptography. But the idea alone was not enough. It took a lot of people: many of them academics working in cryptography, but not only, communities that felt that cryptography is important, and engineers who put in very substantial effort to implement it. It was a whole number of people who worked together and figured it out. In the 80s, one key exchange protocol was produced after another, many key exchange protocols, because even though the idea was great, implementing it was not so simple. You have to take very many different things into account, for example certificates. It was very complicated. How do you authenticate the two parties? There were many, many issues, and there was one protocol after another, and one of them was broken, et cetera, et cetera. Eventually, we came up with a formal model of what a key exchange is. And just a few years ago, we got the first formal proof that the implementation of TLS, as the protocol is called today, is actually secure. The implementation as we use it. So that took two decades. Hopefully it won't take decades now; we have the tools, we have the understanding. But a distributed ledger, a robust transaction ledger, is a far more complicated object compared to a key exchange protocol. It's not just Alice and Bob who want to send two messages to each other. It's a far more complicated protocol.
So we have to invest that effort, and that effort will not come from any individual or any single actor. It will have to come from the community as a whole. But also, it should come with a realization that we should get organized, and be serious in all the things we do, and also in the way that we disseminate information. So it's not, going back to the question, about who did it right or who did it first. It's about sharing ideas, organizing information, and advancing, all together and in a scientific fashion, this wider area of distributed ledgers. Yeah, stake pool creation. There was a question up there. But maybe let's take two more questions. Yeah, two questions. Does it lock your ADA, then, stake pool creation? Does it lock your ADA? Well, no, it doesn't lock your ADA. Nevertheless, you should maintain it. In other words, you can still move your ADA around. Nevertheless, the stake pool creation certificate you can think of as a commitment that you have some ADA behind the pool. So essentially, if you sell that ADA and you don't have it anymore, then your stake pool will appear to have less ADA backing it up, and that will have an impact on its rewards. Yeah, the rewards will be lower. They will be slightly lower. Because this goes back to Sybil attack prevention, right? There's no way to distinguish between, let's say, a legitimate stake pool leader who sells off his ADA, and someone who engages in a Sybil attack and creates a stake pool, and then another, and another. So because we cannot distinguish between these two behaviors, we have to impose some restriction there. So it won't lock your ADA, but if you don't keep it, then you will lose some money. And the gentleman over there. Last question. Yeah.
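The "ADA behind the pool" commitment described above can be sketched as a reward adjustment. The function shape and the `influence` parameter here are entirely hypothetical; the point is only that rewards degrade, rather than vanish, when the committed stake is no longer there, which is what makes Sybil-style pool splitting unprofitable:

```python
def pledge_factor(actual_pledge, committed_pledge, influence=0.3):
    """Hypothetical Sybil-resistance adjustment: scale a pool's rewards
    by how much of its committed pledge is still actually held.
    Full pledge -> factor 1.0; no pledge -> factor 1 - influence."""
    if committed_pledge <= 0:
        return 1.0
    met = min(actual_pledge / committed_pledge, 1.0)
    return (1.0 - influence) + influence * met

# Selling half the ADA behind the pool lowers rewards, but does not zero them.
full = pledge_factor(1_000_000, 1_000_000)   # factor when the commitment is kept
half = pledge_factor(500_000, 1_000_000)     # factor after selling half
```

Because an attacker splitting stake across many pools can back each one with only a fraction of its declared commitment, a penalty of this shape makes the split pools collectively earn less than one honestly backed pool.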
Could you maybe say a few words about the ongoing research in the direction of quantum resistance? I know it's not an imminent threat necessarily, but I know that there's some stuff going on there. Yes, we're thinking about it a lot, actually. It's not an imminent threat, but we do care about this question a lot. It's certainly in the news. We don't consider it an immediate threat, obviously, but it's not a threat to be taken completely lightly, especially for a system like Cardano that we would like to be available for many, many years to come. So the first step towards this is to develop a digital signature scheme that is post-quantum resilient and suitable for our purposes. And the second step is going to be to integrate that into our system in a way that the system becomes post-quantum secure. The good news about this is that, at least in principle, theoretically speaking, there is nothing that prevents us from doing that. So this is something that's definitely going to happen. The primary concern is performance. What I anticipate is an initial release of an alternative signature scheme that your wallet can use, which is going to be post-quantum secure. And then we're going to engage in the type of security analysis that says what's going to happen if you are in this mixed situation where some accounts are post-quantum and some others are not. Eventually, what we'd like to show is that if the majority of stake is behind post-quantum accounts, then the system is going to be post-quantum secure. So a special analysis should be done there.
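For flavor, hash-based signatures are the classic post-quantum candidate for the kind of alternative signature scheme described above. Below is a textbook Lamport one-time signature, purely as an illustration of why such schemes are considered post-quantum (their security rests only on the hash function, not on factoring or discrete logarithms); it is not the scheme IOHK will ship:

```python
import hashlib, os

def H(data):
    return hashlib.sha256(data).digest()

def lamport_keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def message_bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, msg):
    # Reveal one secret per digest bit; a key pair must sign ONE message only.
    return [sk[i][bit] for i, bit in enumerate(message_bits(msg))]

def lamport_verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][bit]
               for i, bit in enumerate(message_bits(msg)))

sk, pk = lamport_keygen()
sig = lamport_sign(sk, b"post-quantum hello")
```

The keys and signatures here are kilobytes rather than bytes, which illustrates exactly the performance concern mentioned in the answer; practical hash-based schemes layer Merkle trees on top of this idea to allow many signatures per key.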
But there are, you know, many tricky points there from the security analysis point of view, because you would like to analyze the system in the presence of a quantum adversary that might be able to engage with the stakeholders in more complicated ways, let's say, than a regular adversary. So this is something that is advancing, and we do have within IOHK a research stream exactly for that. And actually, we will be releasing more information about this in the near future. So basically, that was that. And since I'm at this, I should mention another stream of research, which I haven't had the chance to talk about at all today, which is a version of Ouroboros that is privacy-preserving, in the sense that, for example, Zcash is privacy-preserving. This is another very, very interesting research direction for us. Especially in the PoS setting, it's much more difficult to achieve the same effect, because there are many more things happening in the blockchain compared to, let's say, a standard Bitcoin-like transaction ledger. You have staking operations and delegation, and all these operations are even more privacy-problematic compared to a standard Bitcoin-like ledger. So that's another research stream which I hadn't mentioned, but we are also putting a lot of effort into it these days. Okay, with this, thank you so much for your questions. They were great.