All right, thank you very much everybody, it's really nice to be here. So my talk is going to be a bit more technical, because I'm a technical person. And I would really encourage you to ask questions as we go along. There will be a Q&A later, but I also enjoy questions as we go along, and it's useful for me to make sure that you understand what I'm talking about. If I'm saying things that are too simple and too easy, I can go faster; and if it doesn't make any sense, I can try again. So just stick your hand up or shout out questions as we go along, particularly if it's something that doesn't make any sense to you. OK. So, yes, before we start: this is obviously a more technical talk. It's about what we're doing in Cardano, partly about new features, but a lot of it, to start off with, will be about how we're building Cardano, and the philosophy of how we're going about building the software. So just briefly about who I am, very briefly. I'm a computer scientist. I'm a programmer. I'm not a marketing person. No BS is my motto. I'm the head of engineering for Cardano, and I'm also a Haskell programmer. I run a Haskell consulting company. I have a PhD in computer science. I've got 10 years' experience in Haskell consulting, and nearly 20 years in Haskell itself. So I've been doing this stuff for a long time, but not cryptocurrencies, actually. Cryptocurrencies are new to me in the last year and a half. So, as I said, it's going to be about what we're building and how we're building it. But actually I'll start with how we're building Cardano, and then later I'll talk about some of the new features that are in development at the moment. So why should we care about software quality? It's kind of an obvious question and it seems silly to have to ask it. But most software is really bad. And we all know that, because we use software and we know it's really bad.
So why should we care about software quality when it comes to cryptocurrencies? The answer is really obvious. If you believe that cryptocurrencies are for real, then you ought to want those things not to fail. And if you build cryptocurrencies the way that we build ordinary software, then they will fail, and people will lose billions of dollars or Bitcoin or whatever. So failure is very expensive in this case, because it's money. And if you believe it's money, then you should believe that it's worth making these systems work. And industry standard means bad: industry standard practice in the software industry is terrible. So we need to aim much higher than industry standard to get something that you should have any confidence in. So my question is, if someone else is building a cryptocurrency system, why should I trust your system with my money? Show me the evidence. I don't just want marketing promises. I want evidence like science, like mathematics; I want proper evidence. That's what I really want, because I am very risk averse. Maybe that makes me a very bad cryptocurrency investor, but probably it makes me a good person to be helping build a cryptocurrency, being risk averse. I worry. So my question to people building these kinds of systems is: show me the evidence. And the opportunity for failure is everywhere. There are so many different ways that systems like this can fail. And so I would like evidence that your system is not going to fail in this way, this way, this way, this way, this way. So let's just go through some of them. You can have a flawed protocol design in the first place. That means the underlying cryptography of how the thing works is flawed in some way. It's very easy to get that wrong. An incorrect implementation of a design you've already written down: that's also very easy to do badly. There's a huge area encompassed by incorrectly implementing the design.
And that's actually mostly what I work on day-to-day: how do we make sure that we correctly implement our designs? So that's a huge one all on its own. Then you've just got the typical software mistakes that you see all the time, which are the reason why your apps and your operating system and so on have security updates all the time. Why are there security updates? It's because people built the system wrong in the first place. And in a cryptocurrency, any one of those could have been fatal to the system. Amateur cryptography. Amateur cryptography is like amateur brain surgery: it's a really bad idea. People think that they can do it, and then you easily find... well, not easily, but someone who wants to steal all your money finds a flaw in the cryptography, and then that's it. That happens more than you would imagine. You can fail by missing performance deadlines. Cryptocurrencies involve things happening on a schedule. In Bitcoin, the schedule is every 10 minutes, so maybe that's easy. But in other systems, Ethereum, Cardano, there are tight deadlines. Things have to happen within 10 seconds or 20 seconds, and if they don't, things start to go wrong. Systems can collapse under load: the system seems to work fine when you run it with just a few users and falls apart when you run it with lots of users. Failure to scale: it works fine when there's only a few nodes running the system, but not when the nodes are far apart, or there's lots of nodes, or there's a high transaction rate. There are all kinds of dimensions of scaling, but systems can fail to scale and then not be useful, because they didn't manage to scale to the size that you wanted. Denial of service failures: people trying to attack your system. There are economic attacks. There are social problems. Bitcoin doesn't have any way of upgrading itself except through... well, there's no formal mechanism.
I mean, there's a sort of social process, and that maybe seems to work, but it also sometimes gets jammed up. Plausibly, Bitcoin could fail because people are not able to agree to make changes and move forward. That's a failure of some kind of social voting system. There's a lack of a voting system in Bitcoin, so maybe you need one. So systems can fail due to that. Systems can fail due to macroeconomics. So there are all kinds of things, software-related and non-software-related, that can cause a system to fail. I would like evidence that the system that you're building, or rather the system I'm building, is not going to fail. And that's quite hard. Yeah, hubris. So cryptocurrencies are new things, and they do rely on a lot of new ideas, and they rely on new cryptography; particularly proof-of-stake protocols rely on new cryptography. And everything after Bitcoin relies on new cryptography. But it's kind of easy for people who have been building those kinds of systems to think that the ordinary rules of how the world works don't really apply to them anymore. You all know the story of Icarus, who flew too close to the sun, and eventually the wax melted and his feathers started to fall away and he fell to earth. There are many readings, but part of the story is that he didn't really believe that the rules applied to him anymore. He got too close to the sun, got too hot, and that eventually melted the wax, and he fell to earth. And there's a little bit of a danger of that in the way that some of the cryptocurrencies have been built. I'm particularly thinking of Ethereum, actually. So there is a danger of ignorance and hubris. It's dangerous to believe that it's all new and that we don't need any of the old ideas. Actually we need lots and lots of the old ideas, as well as the new ideas, right? You need all of it. So it's a mistake to believe that you can build a high-quality system by being an expert only in cryptocurrencies.
You actually need experts in a lot of different things, and you can't be an expert in all of them. I'm an expert in maybe two of these things. So to build a cryptocurrency, you need cryptography, computer science, formal methods, programming languages, software engineering, system design, blah, blah, blah. It goes on and on. And all of these things are actually necessary, I think, or at least quite important. The ones at the top are clearly critical; the ones towards the bottom are probably rather important. And no one can be an expert in all of these things. So you need to have a team that has expertise in all of these things to be able to build a modern cryptocurrency that is going to really work, that is not going to fail in all the ways that I described earlier. You need economists to make sure that the system doesn't fail due to economics. You need game theorists to make sure that the microeconomic incentives work out. You need people like me to make sure that the programming languages are any good. Things like that. And if you look at things like Ethereum, I don't want to bash Ethereum too much, but it's clear that it was built very quickly, and it was great. But I'm an expert in programming language technology, and so I look at Ethereum and I see that the people who designed the EVM and the languages that run on the EVM had clearly never studied programming language technology. There's a whole academic discipline of how you design programming languages, and Solidity doesn't take advantage of any of that at all. And there are consequences to that. So my point is that there are lots and lots of existing areas of study, of knowledge, that these systems, cryptocurrencies, have to make use of to really be great, to really work in all aspects.
So the philosophy of development of Cardano is to try to use the best available academic knowledge and skills, and to rely on expertise from all these different domains. Where necessary we do original research, but in many cases we can simply pick up existing knowledge that is known within those particular disciplines, within game theory or microeconomics or formal methods. And then of course, because we have to deliver things to market, we have to get features out to users, you have to pick an appropriate trade-off between quality and delivering new features. There is a trade-off, although it's not quite where one might imagine. So okay, I'm just going to do a very brief history of Cardano. The first code was written in the summer of 2016. The mainnet release was only one year after that, which was quite quick, really. I mean, the users were complaining because it wasn't quick enough, but from my point of view as a software engineer, as a programmer, that was quite quick. I got involved about halfway through that process. In March this year, we started doing rolling releases, and since then we've done incremental releases: these are the major releases, and there have also been intermediate, smaller releases. So Cardano is still relatively young in that sense. Another way of charting the history of Cardano is to look at the exchanges that support Cardano. When the mainnet was first launched, there was just one exchange; there was a second a few months later; and at the moment there are at least a dozen tier-one exchanges that have significant volume, and more are integrating all the time. Version 1.2, which came out not so long ago (when was that? June), included some new improvements to help exchanges.
And coming along shortly, in the 1.4 release, is a completely new wallet implementation that will support exchanges at the scale at which exchanges have to operate. I'll come back to that point in a minute. So, lessons that we have learned from launching the mainnet. Some things have worked really well; I'll get on to what hasn't worked well in a minute. The core system has been stable. It's been running 24/7, it is globally distributed to survive failures in particular data centers, and that has actually been important. We have observed with our monitoring system that some of the Amazon machines have lost connectivity with each other, or have been rebooted, or failed for whatever reason. There have been outages beyond our control, but the system was completely stable throughout, because it was designed to be globally distributed and have proper failover. We've had good system monitoring, so we've been able to see that it works. 95% of transactions make it into the next block, so that's a measure of how quickly transactions make it into the system. And slots, in Cardano's case, are 20 seconds. And we can achieve quite good throughput for exchanges by using multi-output transactions, which is the same as what Bitcoin does, but it's good to see that it does actually happen. So, those are the good things, and by and large, that's been quite satisfying. But there are definitely things that are not as good as you would hope. And as an engineer, I tend to focus on the negative things. I focus on what I need to fix, not on what's great. What's great is yesterday; what I have to fix is what's in front of me. So, there are some obvious lessons that we could draw from the mainnet launch. It was, as I said, developed quite quickly, in the way that a lot of software is. And so the performance requirements were not clearly understood at the beginning.
They needed to be better understood at the beginning. Performance engineering can't be left to the end; it has to be done much earlier. I mean, this hasn't been too much of a problem: at the scale at which Cardano is currently operating, we have about a factor of 50 headroom. But I can see that we could do a lot better, and there are things that could have been done better. And distributed systems, concurrency, and networking are really hard. A lot of presentations on cryptocurrencies are like, this is easy, this is awesome. But actually the engineering, writing the software, making it work correctly, is very difficult. Distributed concurrency is a hard problem in computer science. So, a corollary of that is that hard problems require more formality, and I'll say what I mean by that: more formal approaches in the development of software. So, as an example, every system needs a wallet, and Cardano has a wallet. The wallet that we first deployed proved to be good enough for desktop users, but not good enough for exchanges. We had to do a lot of remedial work to make it just about acceptable for exchanges in the early months after the mainnet release. And it became obvious that it would be necessary to rewrite the wallet from scratch. I'll talk about that in a bit more detail, because it gave us a good opportunity to do things better, and I'll show you how we've done that. Question? Can you talk about the numbers? What numbers are good enough for exchanges? Yeah. Okay, why was the wallet not good enough for exchanges? I guess that's really your question. Yes, what are the exact numbers? How many transactions a second? How many outputs a second? At certain times, embarrassingly, exchanges only managed to get like three transactions per minute, which doesn't seem very good at all. Okay, wait, let me get this straight. So you're saying that... That's not the whole system, that's the wallet, and that's why it's changing.
Three transactions a minute is okay for a desktop user. Pretty much, yeah. Well, also remember that the desktop users do not have large wallets. So exchanges have very large wallets, and the performance for very large wallets was worse, much worse, much, much worse than for ordinary users. So an ordinary user with like 10 entries in their UTXO would have no problem doing multiple transactions per minute. But the way that the wallet was written, it had those problems. It did not scale well at all. So the core system worked very stably, but the wallet was actually not very good. So what are the ways it was bad? It was, yeah, basically it didn't scale. That was the main problem. The asymptotic complexity was poor. The management of data concurrently was poor. It was written too quickly. I mean, I could go on and on about what was wrong with it. Okay, so you say 1.4 solves that. So how many transactions a second, how many outputs a second can you do now? 1.4 is not released yet, and the new wallet is still sort of about 90% done. So I can't give you those numbers yet, but I anticipate that it'll be much better. And I can tell you why, right? I have evidence, which I'll come to in a second, right? Because I will talk a little more detail about the wallet. So yeah, the wallet was not very good for exchanges. It was okay for desktop users. And so we decided to rewrite it from scratch and to take that opportunity to do things properly, at least properly in the way I see it, the way I would like to go about making software. So the way that we did this was by starting by making a semi-formal or formal specification. What is it that the wallet really has to do? So I started writing a precise specification of what a wallet is, and that's written in a mathematical style, mathematical notation, mathematical logic, just set theory in fact. 
And just the act of doing that, that kind of design process, forces you to think clearly about what it is you're doing and to simplify as much as possible. One of the problems in the original wallet design was that it had accumulated complexity that was not essential. And if you talk to software developers, they will tell you that accidental complexity is like the worst thing. You need to make things as simple as possible. But making things as simple as possible is actually quite hard; it requires a lot of thinking. So writing these things down in a mathematical style forces you to think about this stuff, and it highlights the tricky parts of the problem, which might not previously have been obvious as problems at all. I'll give an example in a second. And then you can try to prove things, and we have done that in our wallet specification. I'll come back to what this means in a second. So the point about writing down a specification in this mathematical style is, yes, it makes you think clearly; you can't get away with fuzzy thinking. It forces you to be precise and to think clearly and logically and to write it down. And then you can use that. You can try to prove things about it. But in fact, we don't actually prove things: we state properties and then we test them. This is what makes it semi-formal compared to what a mathematician would do. A mathematician would prove things about a specification like this. Whereas, because we're doing this in what I call a semi-formal style, this is the trade-off between how quickly you can implement something and how good it's going to be. The gold standard would be proof, but it's a lot quicker to state the properties and then test them, and that's the approach that we've taken. And taking this approach, in my experience, leads to dramatically simpler and more robust programs. The exercise we went through eliminated huge areas of accidental complexity in the original design.
So this is one page out of the 40-page formal specification of the wallet. I don't expect you to be able to read it all; it's slightly too small. But the point is, this is more or less the entire specification of what it is to be a cryptocurrency wallet, in one page. And that is actually quite an achievement, to get something down to being that small, that simple, and that mathematically precise. That's quite unusual in the way that software is built, but in my book, that is the way to do it. Go on, another question. Sorry, I'll repeat the question as well. I'm wondering, you seem to have one balance. There are two balances, actually: there's an available balance and a total balance. And in the end, we end up with three balances. Okay, but how about things like HD wallets? Yeah, this describes HD wallets as well. In fact, this is agnostic as to whether it's an HD wallet or not. Okay, but this describes just one address of the HD wallet? No, no, it describes the entire wallet. It's a UTXO-based wallet, so it describes a set of addresses. A single address would be an instance of this, but an HD wallet would be another instance of it. And this is published online, if you would like to go and read it. Okay, thanks. It's our formal specification. And if you are a mathematician, or you've done a bit of mathematics at undergraduate level, you could look at this and you could see what it means. It's not actually all that complicated. It's just set theory: finite relations, functions, sets. It's not very complicated. Is this formal specification online too? Yes, this document is online; it's on the Cardano docs website. And this is the specification for our new wallet. Now, okay, let me get to the next part.
So having a specification is nice, but you also need to have an implementation, and then you have to have some evidence that your implementation and your specification do the same thing, that one is an implementation of the other. So here's a commuting diagram; hands up, the three people in the audience who know what that means. One, two, three. Oh, that was a good guess. Oh, four or five. Okay, fine, good. For the rest of you, I will explain what it means. But the takeaway from this is that this approach of writing a specification and then testing against that specification gives you an order of magnitude more confidence in the design that you've done. Partly it's because you started with a simple design, because you were forced to by having to write it down in mathematical notation. And secondly, by testing it against the specification in this fairly comprehensive way. We can be quite sure, not 100%; testing never gives you 100%. It's not as good as proof, but testing in this style gives you quite good evidence that your implementation does what your specification says it should do. So let me just explain how this works. You have a specification that came from the paper; that was this stuff. Where has it gone? And this is described in mathematics, in set theory. But the way that we wrote this was deliberately so that it could be very easily converted into a program. In particular, it's very easy to convert this into a functional language, in particular Haskell, because Haskell is a very mathematical style of language. A lot of these things are basically functions, and so it's very easy to translate them into Haskell, into a functional language. And what does that give us? That gives us an executable specification, a specification that we can run as a program, because it is in fact a Haskell program. These things translate into Haskell functions.
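To give a flavour of what "executable specification" means here, a heavily simplified sketch in Haskell. All the names and types below are illustrative stand-ins, not the actual Cardano code; the real specification (published on the Cardano docs site) is parametric, handles rollbacks, and is far more careful. But the shape is the same: the abstract wallet state is just a UTxO plus a pending set, and the spec's operations are ordinary pure functions.

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set
import           Data.Map.Strict (Map)
import           Data.Set (Set)

-- Illustrative stand-ins for the spec's abstract types.
type Addr  = String
type TxIn  = (Int, Int)        -- (transaction id, output index)
type TxOut = (Addr, Int)       -- (address, coin value)
type UTxO  = Map TxIn TxOut

data Tx = Tx { txId :: Int, txIns :: Set TxIn, txOuts :: [TxOut] }

-- The abstract wallet state: just a UTxO and a set of pending transactions.
data Wallet = Wallet { utxo :: UTxO, pending :: Map Int Tx }

ours :: Set Addr -> TxOut -> Bool
ours addrs (a, _) = a `Set.member` addrs

-- applyBlock: add the block's outputs that belong to us, remove the
-- inputs the block spends, and drop now-confirmed pending transactions.
applyBlock :: Set Addr -> [Tx] -> Wallet -> Wallet
applyBlock addrs block (Wallet u p) = Wallet u' p'
  where
    newOuts = Map.fromList
      [ ((txId tx, i), out)
      | tx <- block
      , (i, out) <- zip [0 ..] (txOuts tx)
      , ours addrs out ]
    spent = Set.unions (map txIns block)
    u'    = Map.union newOuts u `Map.withoutKeys` spent
    p'    = foldr (Map.delete . txId) p block

-- newPending: record a transaction we submitted but have not yet seen on chain.
newPending :: Tx -> Wallet -> Wallet
newPending tx (Wallet u p) = Wallet u (Map.insert (txId tx) tx p)

-- Total balance: everything in our UTxO.
totalBalance :: Wallet -> Int
totalBalance = sum . map snd . Map.elems . utxo

-- Available balance: total minus outputs our own pending transactions would spend.
availableBalance :: Wallet -> Int
availableBalance (Wallet u p) =
  sum (map snd (Map.elems (u `Map.withoutKeys` pendingIns)))
  where
    pendingIns = Set.unions [ txIns tx | tx <- Map.elems p ]

main :: IO ()
main = do
  let addrs = Set.fromList ["ours"]
      tx1   = Tx 1 Set.empty [("ours", 50), ("theirs", 20)]
      w1    = applyBlock addrs [tx1] (Wallet Map.empty Map.empty)
      tx2   = Tx 2 (Set.fromList [(1, 0)]) [("theirs", 50)]
      w2    = newPending tx2 w1
  print (totalBalance w1)                      -- 50
  print (totalBalance w2, availableBalance w2) -- (50,0)
```

Notice how the two balances the spec talks about fall out immediately: the total balance counts the whole UTxO, while the available balance subtracts anything a pending transaction would spend.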
And so that gives us a version of what we're trying to do that is abstract; it is simpler. Abstraction is all about getting rid of the details that don't matter and focusing on the details that are essential. So the wallet specification doesn't include anything about cryptography. It doesn't include anything about HD wallets. It only has the idea that you have a set of addresses; how that set works, it doesn't care about. It focuses on what I thought were the hardest things, based on having looked at the old wallet and seen what it did badly, trying to focus on the hard problems. So the specification glosses over particular details deliberately, so it's less complex than the real implementation. So how do we relate the real implementation, which has all of the inherent complexities, which has to do all the cryptography, et cetera, to the specification? The answer is: by an abstraction function. The abstraction function goes from the complex to the simple; it strips out the details and gives you back the same value, but in the simpler representation. So what do I mean by that, concretely? For example, for the wallet, you would have the state of the wallet, which in the real implementation might be a database, or a database plus some additional information in memory and some files and whatever. So that is the state of the wallet in the real implementation. The abstraction function translates that into the simple set-theory-style description of the wallet, which was just in terms of some set-theoretic description of a UTXO. It turns out it's a pending set and a UTXO; that's all you need. So the abstraction function goes from one to the other and gives you that simpler version.
So now the idea is that we can start with the state of a wallet down in this corner, and we can run certain operations on the real implementation of the wallet and get to a new state of the wallet over here. What might those operations be? They might be things like new blocks arriving, or new transactions being created. Those are the kinds of things that the real implementation will do to get you from one state of the wallet to a new state of the wallet. For each state of the wallet, you can apply the abstraction function to get a corresponding state in the abstract version that is described as part of the specification. And then you run the equivalent functions in the abstract version. So for example, it may be slightly hard to see here, but applyBlock is one of the operations on the wallet; it changes the state of the wallet when a new block arrives. You notice that your balance might have increased, because someone paid you something, for example. Or newPending is another transition. So the operations down here have corresponding things in the specification: applyBlock, newPending. And so the idea is that you do one operation here and then apply the abstraction function, and you get to a value at the top; or you apply the abstraction function first and then apply the operation at the level of the abstraction; and you should get the same answer. And if you do, for all possible operations and states of the wallet, this is a correct implementation of that specification. And then what you do is generate random sequences of operations on the wallet, starting from an empty wallet, and that gives you really quite good coverage of this. In principle, you'd have to prove this to be true for every single possible sequence of actions. We don't do that, but we test with large numbers of randomly generated sequences of operations. Does that make any sense? Someone said yes at the back.
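The commuting-diagram test itself can be sketched as follows. This is a toy version: a Map-based "abstract" state stands in for the spec, an association-list "concrete" state stands in for the real implementation, and a small exhaustive enumeration stands in for the QuickCheck-style random generation the real tests use. All names are illustrative, not the actual Cardano test code.

```haskell
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)

-- Toy stand-ins: the "abstract" state from the spec and a "concrete"
-- implementation state with a different representation.
type Abstract = Map Int Int
type Concrete = [(Int, Int)]        -- association list, no duplicate keys

-- The abstraction function: strip representation detail, keep the value.
abstr :: Concrete -> Abstract
abstr = Map.fromList

-- Operations with a version at each level (like applyBlock / newPending).
data Op = Insert Int Int | Delete Int deriving Show

applyA :: Op -> Abstract -> Abstract
applyA (Insert k v) = Map.insert k v
applyA (Delete k)   = Map.delete k

applyC :: Op -> Concrete -> Concrete
applyC (Insert k v) s = (k, v) : filter ((/= k) . fst) s
applyC (Delete k)   s = filter ((/= k) . fst) s

-- The commuting-diagram property: "concrete step, then abstract" must equal
-- "abstract first, then abstract step", at every state along the sequence.
commutes :: [Op] -> Bool
commutes ops = and
  [ abstr (applyC op s) == applyA op (abstr s)
  | (s, op) <- zip (scanl (flip applyC) [] ops) ops ]

-- Enumerate all operation sequences up to a given length over a tiny
-- universe of keys and values (the real tests generate these randomly).
allSeqs :: Int -> [[Op]]
allSeqs 0 = [[]]
allSeqs n = [] : [ op : rest | op <- opUniverse, rest <- allSeqs (n - 1) ]
  where
    opUniverse = [Insert k v | k <- [0, 1], v <- [0, 1]] ++ [Delete k | k <- [0, 1]]

main :: IO ()
main = print (all commutes (allSeqs 4))   -- True: the diagram commutes
```

If someone introduced a bug in `applyC` (say, `Insert` failed to replace an existing key), some short sequence would make the two paths around the diagram disagree, and the test would report the failing sequence.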
So this gives you really quite good evidence that your implementation does what the specification says it does. Sorry, there is a question here. Go on, question. Yeah, I'm just wondering: it seems that you can just get your abstraction function wrong. Without this approach, you can get your real implementation wrong and have problems; but if you do use this commuting diagram and you get your abstraction function wrong, you end up with the same problems. So is it easier to write the abstraction function, or how do you know it's the right one? How do you know that you've done the abstraction function right? You can also write the opposite one and check that the two of them are proper inverses of each other. And it's true, you can make mistakes in the abstraction function, but it's quite likely that if you do, you'll catch it when you do the later simulation: something will not match up properly. But it's true, it is technically possible to get that wrong. This is why this is not a completely formal proof method. Sorry, I will repeat the question as well, for everybody and for the recording. So of course, there's model-driven development, which has the opposite direction of arrows, so you can generate code and... This is not about generating code. This is about values. Yeah, this is the kind of opposite of that. I believe at one point there was a decision to implement the protocol in Haskell, among many other possibilities for how to guarantee high-quality code. And my question would be: how does the choice of the programming language actually affect how you deal with the problems of performance? With performance? Yeah. With performance, not so much.
The language doesn't matter quite so much, because performance is really about analysing what you're doing. So, for example, we can actually think about performance at the level of the specification already. One of the next chapters here is... well, actually, there we are. This is the asymptotic complexity of all of the operations in the basic version of the specification. And then, in the next version, we change it to... where's it gone? Well, we notice that it's not great, and then we do some refinements and get something where the asymptotic complexity is better. For this problem, the absolute performance doesn't matter too much; what's critical is the asymptotic complexity. So, the original wallet accidentally had terrible performance, not because it was written in Haskell. It had terrible performance because it was doing things which were linear that could have been logarithmic. So first-year computer science, where you study asymptotic complexity, that's actually really important. You have to do that kind of analysis, discover that the asymptotic complexity is not what you wanted, and then fiddle around with it until it is. The asymptotic complexity is now n log n, and that's great. The exchanges are not going to have problems when you're doing something that's n log n in the size of their wallet, whereas they are if it's quadratic. Yes, of course you need to handle asymptotic complexity first, and this is about how you pose the problem, which I believe this specification very much helps with. But if you want to speed something up 10 times, the constant factor is important, and that's actually the issue, I guess, which you are solving. Ultimately, you have to have good constants too. But number one is: get the asymptotics right. Because if you get that wrong, nothing will save you.
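To make the linear-versus-logarithmic point concrete, here is a small illustrative comparison, not the actual wallet code. Spending n entries one at a time from an association-list UTxO costs O(n) per spend, so O(n²) overall: the accidental quadratic described above. The same operation on Haskell's finite map (Data.Map, a balanced tree) is O(log n) per spend, O(n log n) overall.

```haskell
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)
import           Data.List (foldl')

-- Spending one UTxO entry, with the UTxO as an association list:
-- O(n) per spend, so O(n^2) for n spends -- an accidental quadratic.
spendList :: Int -> [(Int, Int)] -> [(Int, Int)]
spendList k = filter ((/= k) . fst)

-- The same operation with the UTxO as a balanced tree (Data.Map):
-- O(log n) per spend, so O(n log n) for n spends.
spendMap :: Int -> Map Int Int -> Map Int Int
spendMap = Map.delete

main :: IO ()
main = do
  let n   = 2000
      u0l = [ (k, 1) | k <- [1 .. n] ]
      u0m = Map.fromList u0l
      ks  = [1 .. n `div` 2]
      ul  = foldl' (flip spendList) u0l ks
      um  = foldl' (flip spendMap)  u0m ks
  -- Same result either way; only the asymptotic cost differs.
  print (Map.fromList ul == um)           -- True
  print (Map.size um)                     -- 1000
```

Both versions compute the same wallet state; the difference only shows up as n grows, which is exactly why a wallet that was fine for a desktop user with 10 UTxO entries fell over for an exchange with a very large one.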
And that was the problem with the performance of the old wallet. I am really not too worried about the performance of the new wallet, because I know that the asymptotic complexity is sensible and the constants are going to be fine, because it's just using Haskell's finite map library, and that's pretty good. It's not doing anything particularly unusual there. All right, thank you. Any other questions on all this formal nonsense? All right. So a closely related issue is that you can't just rewrite everything. With the wallet, we decided it was actually worth rewriting from scratch, but you can't do that with everything; the system is a lot bigger than just the wallet. So we have to balance things. If I had my way, I would say: all right, let's not do any new features and let's just rewrite everything. But that's not okay from the point of view of users, so it has to be a balance. So, how do we manage this trade-off between trying to do things in a way that produces evidence of high quality, et cetera, and getting new features out to users at the same time? Well, it's tricky to strike that balance. What we are doing at the moment is, as part of delivering the decentralization and particularly the delegation features in the system, we are taking that as an opportunity to introduce more formality and more executable specifications, much like what we've done for the wallet, but doing it for existing code that we are in the process of changing. So we're trying to do both things at the same time. That's tricky, but it lets us balance that trade-off of delivering new features while at the same time trying to improve the quality to much better than industry standard. Right. Okay. I don't know. How am I doing for time? Just go on. All right.
Tell me when to stop, because, I mean, there's features I could talk about forever and ever. There's quite a lot of stuff. So, the biggest thing, I guess, that we're working on at the moment, the thing that everyone's most keen on seeing done and out and released, is decentralization. So, as I'm sure anyone who's, you know, looked at Cardano before will know, Cardano, in its first release, is federated, but not decentralized. Federated meaning that, you know, it's operated by IOHK, the Cardano Foundation, and Emurgo. And it's not, you know, a fully decentralized system yet, but it will be. So, decentralization for Cardano involves mechanisms to delegate stake. It involves incentives for people to operate stake pools. Stake pools are by analogy with mining pools. They're not the same, but that's kind of the analogy. There have to be incentives to delegate to stake pools. I'll get on to what delegation means if you've not come across it before. And there also has to be, for proper decentralization, decentralization of the network. So, a proper, robust, worldwide, peer-to-peer network layer. And those are all the things that we are working on at the moment. And I'll go into a bit more detail on each of them. So, for delegation and incentives. So, Ouroboros has always been designed as a decentralized, you know, blockchain protocol. Yeah, strictly speaking, Ouroboros is not a cryptocurrency, but a blockchain protocol. So, although it has always been designed as being decentralized, to do that in practice means you have to do quite a lot of other things. I mean, the original Ouroboros paper did describe delegation. But, yeah, question here? So, sorry if we're interrupting you, but you just explained the scheme, this blockchain of yours, how it operates, right? Sorry, say that again? The model your blockchain operates on, right? I'm trying to realize, you know, is it public or private? It's public, right? Or at least, once it's decentralized, then it's properly public.
But in this case, if you have a federation, right? Currently, yeah. Is it permissioned or permissionless? So, technically, currently, you would say that it would be permissioned, because it's federated. But the design of Ouroboros is to be a public blockchain. And the decentralization is about delivering that for real. So, basically, for the end-user, right? To join this blockchain of yours? For end-users, it's completely permissionless. It's for people who are creating blocks that it's currently permissioned, if you like. There's, like, three organizations. What about the case of an enterprise entity who wants to join the project, right? It still has to be permissioned by you, by the foundation, right? To join the blockchain network? No, no. Anyone can join the network. But at the moment, only three entities can create blocks. And with decentralization, then anyone can take part in that. And that makes it then fully decentralized. But if you want to integrate with Cardano, there's no permission required for that. Thank you very much. It's clear. So, yeah, so, as I said, Ouroboros, the underlying blockchain protocol that was designed by the academics, researchers, who work for IOHK, has always been designed to be decentralized. But actually doing it, in practice, turns out to be harder than it looks. So, we've ended up having to write a new research paper on the mechanisms to do delegation, beyond what was in the original Ouroboros paper. Importantly, a paper on the incentive design. How do you make sure that the incentives work out properly? And I'll talk about that more in a second. And then an engineering design for both of these things. An engineering design that covers how to do delegation and incentives based on what the researchers have worked out. And this turned out to take a long time. It proved to be quite tricky, quite subtle. And it required lots of iterations going round and round. Because there's some quite difficult trade-offs.
There's no one clear solution that just satisfies all the requirements. There are a lot of requirements for decentralization and incentives that kind of pull against each other. So, delegation. Let me just summarize how this is going to work once it's released. So, this is, you know, so Cardano is based on a UTXO model, much like Bitcoin. So, addresses in this context are the same style of addresses as in Bitcoin, which is a bit different from accounts in Ethereum. So, addresses in Cardano are associated with the stake keys. So, every, you know, your money goes along with your stake. In Cardano and in proof of stake protocols, the idea is if you own, you know, value, you own a corresponding stake. And the stake gives you the right to take part in the proof of stake protocol. So, in the delegation system, addresses are associated with the stake keys and those stake keys get registered on the chain. The stake keys have a corresponding reward account, and that's what's used to pay rewards. And that itself turned out to be really quite tricky to get that to work right. To get the asymptotic complexity of paying out rewards turned out to be much harder than you would imagine. I can tell you about it if you care. Stake keys then delegate to a stake pool. And it is only the stake pools that take part in the proof of stake protocol. But if you want to do self-staking, run your own pool, then that's what you do. You just run your own pool. And that pool can be private. So, if you really want to, you can run, you can take part in the proof of stake protocol, you know, on your home machine. At least provided you've got a good enough network connection. And then these staking rewards get paid out into these reward accounts. And that happens completely automatically. It does not depend on whether the stake pool that you've delegated to is cooperative or not. The system does it fully automatically. 
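The relationships just described can be sketched as a toy data model. To be clear, everything here is my own invention for illustration (the class and field names, the numbers); it is not the actual chain format, it just records who delegates to which pool and where rewards land.

```python
# Hypothetical sketch of the delegation relationships described above:
# stake keys are registered, delegate to one pool, and have a reward account.
# All names are invented for illustration; this is not Cardano's real format.

class Ledger:
    def __init__(self):
        self.delegations = {}      # stake_key -> pool_id
        self.reward_accounts = {}  # stake_key -> accumulated rewards

    def register_stake_key(self, stake_key):
        # Registering a stake key creates its reward account on the chain.
        self.reward_accounts.setdefault(stake_key, 0)

    def delegate(self, stake_key, pool_id):
        # A stake key delegates to exactly one pool (possibly your own
        # private pool, if you want to self-stake).
        assert stake_key in self.reward_accounts, "stake key must be registered"
        self.delegations[stake_key] = pool_id

    def pay_reward(self, stake_key, amount):
        # Rewards are paid automatically into the reward account,
        # whether or not the delegated-to pool is cooperative.
        self.reward_accounts[stake_key] += amount

ledger = Ledger()
ledger.register_stake_key("stake_key_1")
ledger.delegate("stake_key_1", "pool_A")
ledger.pay_reward("stake_key_1", 10)
assert ledger.delegations["stake_key_1"] == "pool_A"
assert ledger.reward_accounts["stake_key_1"] == 10
```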
And it pays out rewards to the people who run the stake pools and it pays out rewards to people who join the stake pools. So, that's the intuition behind there being an incentive to run a stake pool or an incentive to join a stake pool. Why should I bother delegating? Because you'll get money. Right, that's why. So that's the basics of how delegation itself will work. So obviously, you know, it'll be possible to run a stake pool, and then, just through Daedalus or whatever, it'll be possible to delegate to your choice of stake pool or to your own stake pool. The incentives, incentives are really tricky. A good incentive scheme requires a lot of analysis and a lot of careful design, with expertise that I don't have. You require expertise in mechanism design or microeconomics or game theory to design one of these incentive schemes and be confident that it will do what you want it to do. It's easy to design an incentive scheme where, you know, so-and-so gets paid so much money when something happens. But trying to show that that incentive scheme has the outcomes that you want, that is hard. That's where you require game theory to say, you know, the Nash equilibrium or the stable equilibrium of this game are the kind of outcomes that we want and not the kind of outcomes we don't want. So, I won't go through exactly how that incentive scheme works, but I'll give you the goals and a sort of very quick intuition. Ask me afterwards if you've got lots of questions about it. So, the design goals are to incentivise people to operate stake pools and for people to join stake pools, right, because both are necessary to make the system work.
In case it wasn't clear to people who have not come across Proof of Stake before: Proof of Stake, in its simplest incarnation, relies on everybody taking part, and that's not really practical, because if I'm running, you know, a naive Proof of Stake protocol on my phone, I can't in practice take part in Proof of Stake all the time because, you know, it's not going to be on or not going to be connected to the network. So, the whole idea with Proof of Stake is that, to make it work practically, you have to be able to delegate your rights to take part in the protocol to somebody else, someone who is online, someone who is operating a server. So, there have to be incentives for people to delegate, because if nobody delegates, the system will fall apart. There will be no one to create blocks. So, it's critical that people delegate, just as critical as that there are people to delegate to. It's the combination of those two that has to happen. There needs to be a reasonable number of stake pools, and that means not too many, not too few, particularly not too few, right? If you look at Bitcoin or Ethereum or several of the other systems that are out there, you'll see that actually they're remarkably centralized for decentralized cryptocurrencies. Bitcoin has five major mining pools, which control, you probably know better than I do, some vast proportion of the hashing power in the system. Ethereum, I don't know the numbers for Ethereum, but as I understand it, most of these systems have actually quite a small number of nodes that are really creating most of the blocks. And the equivalent in our system is these stake pools. So, you don't want a centralization collapse where everybody joins the same stake pool and you end up with only five, like in Bitcoin, or worse, only one, right? That would be a disaster.
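One ingredient that guards against that collapse can be sketched very roughly as follows. This is a deliberate simplification of my own, not the actual formula from the incentives paper (which also accounts for pledge, operating costs, and margins): a pool's rewards stop growing once its stake exceeds a saturation point of about 1/k of the total, where k is a target number of pools, so members of an oversized pool see diluted returns and have a reason to move elsewhere.

```python
# Simplified sketch of a saturation cap on pool rewards.
# The real reward-sharing scheme is more elaborate; this shows only
# the "rewards capped at 1/k of total stake" idea.

def pool_reward(total_rewards, pool_stake_fraction, k):
    """Pool earns in proportion to its stake, but only up to 1/k of the total."""
    saturation = 1.0 / k
    return total_rewards * min(pool_stake_fraction, saturation)

def member_return(total_rewards, pool_stake_fraction, k):
    """Per-unit-of-stake return for a member of the pool."""
    return pool_reward(total_rewards, pool_stake_fraction, k) / pool_stake_fraction

R, k = 1000.0, 10  # invented numbers: total rewards per epoch, target pool count
# Below saturation, member returns are flat...
assert member_return(R, 0.05, k) == member_return(R, 0.10, k)
# ...but an oversaturated pool dilutes its members, so they would leave.
assert member_return(R, 0.20, k) < member_return(R, 0.10, k)
```

Under a cap like this, "everybody piles into one giant pool" is not a stable outcome, which is the shape of the equilibrium result described next.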
So, the design of the incentive mechanism tries to make sure that the number of stake pools does not collapse, that there is not an incentive for stake pools to merge. There should be an incentive for them to merge sometimes, but not merging so much that the number of stake pools collapses. So, in fact, the incentive mechanism has a goal parameter that says how many stake pools there should be. And that parameter is adjustable. We get to pick that parameter, or we as the system, people voting on how the system works, get to set that parameter. And the proofs and the simulations show that the Nash equilibrium turns out to be that number of stake pools. So, that's a good result. The system, when it's stable, when it reaches an equilibrium, will have a decent number of stake pools. So, there is not an incentive for them all to just merge. And so, yeah, we have a research paper that explains this, with proofs that show the Nash equilibrium, et cetera, et cetera, and we also have simulations, which can try out scenarios that are sometimes too hard to prove. And the general idea is it's based on competition between stake pools, competition for people to join the stake pool. So, different stake pools are trying to, like, give you the best returns, and that's how they compete with each other. But the rewards that the stake pools themselves get are partly based on their size, and that's what lets us stop stake pools getting too large. And, yeah, tell me when I should stop, because I can go on and on. I mean, every day, so... Go on, the question right here. All right, so, how do you prevent an actor from, you know, Sybil attacking the stake pools? That's an excellent question. How do you prevent Sybil attacks? Just do two pools and ask people to join pool A and pool B. Yes, yes.
So, the idea of a Sybil attack is that, you know, I could create multiple identities, because on the internet nobody knows that you're a dog, and I could go and set up, you know, 100 different stake pools, and thereby try and trick everybody into joining my 100 stake pools. And then, in fact, it's only me who controls all the stake, and then, I don't know, profit or something. I don't know what the point of that would be, but, you know... The idea of a Sybil attack is multiple identities hiding behind, like, virtual identities. And how do you stop that? The general solution to these kinds of Sybil attacks is that there has to be some kind of scarce resource that's used up. So, you know, like in proof of work, the scarce resource is computational power that can't be duplicated. So, for... Let me go back a slide. So, for stake pools, the scarce resource that we use is stake, right? So, we say that when you register a stake pool, there is a slight difference in the rewards that a stake pool will get depending on how much the stake pool owner contributes to that stake pool. So, that means that, you know, if you get a bunch of people together to form a coalition to make a stake pool, they can probably get the amount of stake that they would need to get an optimal return. And so, the point is that stake pools whose owners, which can be multiple people, have pledged a large amount of stake to their stake pool can get a slightly better return, for them and for all of their members. And that means that they can become a successful, competitive stake pool. Whereas, if I start a stake pool and I don't give any stake to that stake pool, I will not be able to make a competitive stake pool in practice. I mean, I can do it, I can try, but it would always be better for someone to join one of the different stake pools that had a better return. And it only has to be a slight difference for people to switch.
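A tiny sketch of why splitting your pledge across many Sybil pools doesn't pay. Again, the formula and numbers here are my own invention, not the paper's actual reward function; the only point is that if a pool's offered return grows with its owner's pledge, then each of N Sybil pools, holding only 1/N of the pledge, offers delegators a worse rate than a single honest pool with the full pledge.

```python
# Hypothetical sketch: pledge makes Sybil pool-splitting unattractive.
# a0 is an invented parameter weighting the owner's pledge.

def pool_return_rate(base_rate, a0, pledge):
    """Return rate a pool can offer; a larger owner pledge gives a
    slightly better rate for the owner and all the pool's members."""
    return base_rate * (1.0 + a0 * pledge)

base, a0 = 0.05, 0.1
whole = pool_return_rate(base, a0, pledge=100.0)       # one pool, full pledge
split = pool_return_rate(base, a0, pledge=100.0 / 10)  # one of ten Sybil pools
# Each Sybil pool offers delegators a worse rate than the single pledged pool,
# and it only takes a slight difference for people to switch.
assert split < whole
```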
And then the point is that I can't use my same stake to create multiple different stake pools. You know, if I pledge a certain amount of stake to this stake pool, the system doesn't let me, you know, duplicate it and put the same stake into a different stake pool. And so, that's the scarce resource that stands in the way of Sybil attacks. Does that answer your question? Yes, thank you. So that's all part of the game theory. That's all the stuff you have to think about with getting this right. Yeah, question in the back here. Yeah, so I don't know much about proof of stake mechanisms, but in proof of work things can get screwed over. I mean, game theory is nice in mathematics, but I took some game theory courses, and eventually it's a model. And in the beginning of the course they tell you that the model does not always reflect what happens in reality, which means that eventually it may happen that one big pool takes over. In proof of work there is a recovery mechanism. You have to control the scarce resource for a long time, and if you stop doing that or the good actors gain more resources, they can take over the good chain. In proof of stake, I'm not sure what happens if some bad actor actually takes over. What's the recovery mechanism, if any exists? The basic idea with how much adversarial power is required to take over the network is very similar between proof of stake and proof of work. So in proof of work you need to control 50% or over 50% of the hashing power in the system to take it over. And if there was one mining pool that controlled over 50% of the hashing power, that would be it. They would control the whole system. Yes, they have to control that hashing power over some number of blocks. It's not very different here. So here the system would collapse if an adversary controlled over 50% of the stake, which is equivalent to over 50% of the hashing power. And the proof of that is in the academic papers.
So we need to make sure that our incentive mechanism does not accidentally encourage a situation where one stake pool controls 50% of the stake. And that's exactly what the mechanism tries to do. And the proofs and the simulations show that it's an unstable equilibrium to have a very large stake pool. That stake pool will actually split, and people will leave that large stake pool because they'll get better returns in smaller stake pools. So that's why one should imagine that you wouldn't get this collapsing into just a small number of stake pools scenario. Does that answer the question? I was asking you about what happens in the recovery. How do you recover? So I understand that the incentive mechanism is designed so that it can't happen, but suppose it did happen. How would you recover if this did happen? That's a good question and I don't know. That's outside of the assumptions that they make in the paper. I would go and ask the researchers exactly that question. What happens when these assumptions fail? And how likely are they to fail? Those are very sensible questions to ask about a system like this. Another question right here? Yeah, go on. I feel like a very hot debate is about preventing basically double staking. With proof of work you actually burn the work, but with stake, when you get chosen, when you get the chance to actually mine the block or stake the block, you can stake two different versions of it and then try to use it to double spend or something like that. Can you talk about how Cardano prevents this? So this is how Ouroboros works. So yes, in principle, someone who is the current slot leader could create two different blocks and send them out to different people. And so for a short time, there'll be a fork. Some people will see one block and some people will see another block. But just like in Bitcoin, after a while, after a certain number of blocks deep, that becomes extraordinarily unlikely, and the proofs of that are in the paper.
So in exactly the same way as in Bitcoin, people are highly likely to agree on blocks that are six blocks deep or 10 blocks deep or 20 blocks deep. The same is true of the proof of stake in Ouroboros, and the proofs of that are in the peer-reviewed literature. Sure, but the problem is... You want an intuition as to why? I can just tell you it's been proven, but you sort of want to know, give me an intuition, why is that the case? Is that what you mean? I haven't read the proofs, so obviously I cannot know what the proofs actually prove. But I can tell you what the proofs prove. I mean, how the proofs work is very difficult. But the statement of the proofs... The theorem is actually very simple. So there are two theorems, and they're exactly the same ones as people have proved for Bitcoin. So the first theorem is persistence. Once a block is somewhat deep in the chain, it is stable to a very, very, very high degree of probability. And the degree of probability depends on how deep it is in the chain. And that's the same as in Bitcoin. And the other property is liveness: so long as over 50% of the hashing power or stake is honest, obeys the protocol correctly, then it will always be possible to make progress, to add new transactions, to create new blocks, to incorporate new transactions. So those are the same properties that Bitcoin and Ouroboros have. Well, you know, the whole idea is for a fork in the blockchain not to overtake the main branch which everybody follows. That's how a double spend could be performed, right? In proof of work, there's a cost for getting this fork on the side done. Here you can theoretically just prepare multiple versions, then your party can get to be the leader on one of those versions, and it costs nothing. You can only do that if you control a very, very high percentage of the stake, near or over 50% of the stake.
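The persistence intuition can be made quantitative with the standard Nakamoto-style toy calculation from the Bitcoin whitepaper (the Ouroboros papers prove an analogue of this for stake; the code below is the textbook random-walk approximation, not a formula from those papers): an attacker holding a minority fraction q of the resource falls behind on average, and the chance of ever catching up from z blocks behind is (q/p)^z, which shrinks exponentially with depth.

```python
# Toy Nakamoto-style catch-up probability: an attacker with resource share q
# (stake here, hashing power in Bitcoin) tries to overtake a chain that is
# z blocks deeper than the attacker's private fork.

def catch_up_probability(q, z):
    """Probability a minority attacker ever catches up from z blocks behind."""
    p = 1.0 - q           # honest share of the resource
    if q >= p:
        return 1.0        # at or above 50%, the attacker eventually wins
    return (q / p) ** z   # classic gambler's-ruin result

# Deeper settlement means an exponentially smaller double-spend probability.
assert catch_up_probability(0.3, 6) < catch_up_probability(0.3, 1)
assert catch_up_probability(0.3, 20) < 1e-7
```

This is why both persistence statements take the same shape: the guarantee is probabilistic, and the probability of a reversal falls off exponentially in how deep the block is.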
To get those double-spends deep in the chain, similarly deep as in Bitcoin, you would need far more stake than you can actually achieve. So it has the same persistence property as Bitcoin. All right, let's continue that later. Yeah, sure, of course. Thanks for the answer. I can certainly wrap up right now. And then if anyone else has any more questions, then we can talk over a beer. I've got lots of supplementary material. But it doesn't matter, these are all optional topics. I'm happy to be questioned about these things later. I'll tell you how fast the internet is. It's not very fast. All right, let's wrap up there. Thank you very much.