Ninety-odd nodes, run up, 45 minutes, and the whole thing comes alive, okay? Actually, the birth of a cryptocurrency must be one of those interesting experiences that you don't get to do very often. And then it was lovely and boring. It operated; it went through real users adding transactions. We got an initial surge of people joining the network. We were panicking: would it be hundreds of thousands, would it be tens of thousands, or whatever? And actually all our sizing was generous. We've reduced the number of nodes since then, because it was a little bit too generous. And then into November, suddenly the wallets tripled. We have no idea to this day why. It went from 500 to 1,500 in under 24 hours. A few little alarms go off. We work out what they mean. We learn how the system performs under stress, and again it remains boring.

So I think the thing that's been most striking about this is that we in Cardano are actually starting to achieve the outcomes that we've been discussing today in this conference, which is this issue of constructing complex systems whose performance you can reason about, because performance is critical these days: to the integrity of the system, to its usability and to its cost-effectiveness. And those techniques, along with the formal methods and along with all of the auditing that we're doing in developing the whole Cardano ecosystem, are what give longevity and sustainability to what's being built.

We've had a great time today. We have gone over these issues of how to measure this stuff, the quality attenuation. You've all been extremely well engaged in thinking about what it means for the economics, the user experience and everything else. One of the questions being asked is: how do I turn this low-level idea into a high-level outcome and deliver something that has genuine business value? A lot of the work I did on Delta Q, the original mathematics, was actually at the turn of the century, so that's about 17 years old. We haven't stood still in that process; we've been refining it. And one of the things I want to talk to you about today is... not that, come on. Why did you fall asleep? Please, thank you. Ah, Peter. I'm sorry; this is the nature of technology. Here we go. One of the things we've been doing is taking it up to the level where you can start to pull the application outcomes together with the notion of the risks and hazards of not performing properly, and how you can start to both reason about those and mitigate them. So this touches the safety-critical communications side. This is, in a sense, critical communication.
And, you know, if I'd given this talk six months ago, I'd have said it was worth, you know, who knows. We'll see how much it's worth in a little while, OK? Right. So the first thing I need to do is give you a very brief, inexact and high-level view of what it means to have a blockchain. Lots of things have been said about them. What is a blockchain? The first thing is that, in the case of a permissionless one at least, it is a fully distributed ledger. It's about recording history. No rewriting history in this world, OK? And it's in fully public view. Everybody can see everything. Well, they can see what's in the ledger; they may not be able to interpret what that means entirely. That depends on how you've engaged in that process.

And this ledger is built up with a new page being added to it every so often. For the latest pages it's possible to rewrite a page, but once you've got a few pages back, no way. The amount of energy, or the number of people you have to put into a dark room and beat up, or whatever, becomes impossible to achieve. So it becomes increasingly immutable the further back in time you go. Each block references the previous block; this is how you create the history. There's a bunch of cryptographic functions and smarts inside that process, and that becomes hard to change.

And inside that block are transactions. It doesn't really matter what they were; they were movements of things. And those things could be tokens which you might associate a value with, if you're really that weird and like cryptocurrencies, right? So what happens is a new block is minted by a miner or, depending on your particular technology, by a leader who is chosen either by a race, which is what Bitcoin does and calls proof of work, or by some form of random selection by consensus, which is what proof of stake does. The difference between these is that the race consumes, for every transaction, the electricity of an average Dutch household for a month, around about 40 euros' worth of electricity for each transaction. Whereas proof of stake, if you'd like to measure it, is five to six orders of magnitude smaller in energy consumption, in fact in total resource consumption: you can pay for all of the compute at that price.

Now, the problem is this. In these proof of stake systems, what happens if a block arrives a little bit too late? The first thing to say is that the integrity of the system is not compromised, because any transactions that sat in that block will get incorporated in future blocks. But the performance is affected, because basically you've wasted that amount of time constructing blocks. And if you can kill off enough blocks, you can start to give the ledger whole pages where no page was written, and things can start to get a bit awkward. There's a whole bunch of mathematical proofs of why this is very difficult to do, et cetera, all of which I believe; but from the mathematical point of view, you've got to make the reality of the system correspond closely enough to that mathematics that the mathematics actually holds. So the key issue I'm trying to analyse here is: how quickly can I communicate this block to the leaders, so that the next one can be built on it? Does that make sense?
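As an illustration of that late-block hazard, here is a minimal Python sketch; the delay model and its parameters are invented for the purpose, not taken from the talk or from Cardano:

```python
# Toy model of the late-block hazard: if a block does not reach the next
# slot leader before the slot ends, the slot is wasted, even though the
# ledger's integrity is preserved.
import random

SLOT_LENGTH = 20.0  # seconds per slot; the loop time mentioned later in the talk

def diffusion_delay():
    """Invented delay model: a base propagation time plus heavy-tailed jitter."""
    base = random.uniform(0.5, 2.0)          # typical global propagation, seconds
    jitter = random.expovariate(1.0 / 2.5)   # occasional long delays
    return base + jitter

trials = 100_000
late = sum(diffusion_delay() > SLOT_LENGTH for _ in range(trials))
print(f"fraction of blocks arriving too late: {late / trials:.4%}")
```

The point is only that integrity survives a late block, but every late block is a wasted slot, so the fraction computed above is pure performance loss.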
The key temporal constraint is that these blocks have to arrive so the next one can build on them. Right? Very simple. It seems simple. So, we've been talking about how you can turn this stuff into business terms and monetary terms and all the rest of it. You've got transactions coming in from the outside world. You mint a block. You diffuse it to everybody else. Everybody incorporates it, and the results go back out. And you're going around this loop. Nice, simple loop, okay?

And you've got a couple of key performance metrics that matter to the end user of these systems, which could be an end user making just a simple transaction, or a bank moving a few billion from A to B, or it could be recording the fact that you've been issued a degree by a Greek institution, or it could be a land registry in a country which doesn't have such institutions, where who owns the land is decided by who beat you up last, that type of thing. What you're typically looking for in the particular system that we're using is that after two to ten blocks, this is unchangeable, depending on how much risk you wish to take. At ten blocks we're talking, you know, universe lifetimes of energy to undo things. So you're interested in how long it takes for a transaction to get into a block, and how quickly those blocks settle so that the history can't be changed. And you're also interested in transactions per cycle, total cycle time, throughput, these types of things.

And these all come down to: yes, you have to write the code; yes, the code's got to be efficient. There's a Delta Q for minting a block. There's a Delta Q for incorporating a block, which sits in the software. And there's a Delta Q for diffusing the blocks. So today we've been talking about quality attenuation, about how the performance characteristics of these systems work. We're now talking about a system that is intended to be of global scope, to be run with no centralised control, no centralised authority, and we wish to be able to build a system that we can rely on in those circumstances.

All right, now, a few pictures to show a bit more of exactly what's going on here. There's the old ledger, the bits you want to incorporate, and the block minting. This "send block" is somebody who's been chosen to be the block leader by a random consensus algorithm. That green one is one person; all these grey blocks at the bottom are all the other people who are interested in the system. We need to do this so that 99% of all the potential slot leaders get that block in time. And we're going around that loop every 20 seconds. And we have been going around that loop every 20 seconds since the 1st of October, okay?

So we know how the underlying performance of things behaves, and we've talked about it in relation to, you know, needing performance for people to do safety-critical functions over networks, needing performance management in order to manage costs, et cetera. One of the things that's really interesting about this approach is that, because of the way these cryptocurrencies are viewed, there should be no hegemony. There is no central authority who has any control here. Now, a telco thinks it has control over its networks; it may only think this, it may not actually be the actuality, but it feels it has control. And governments think they have control over the things inside their estate.
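To make that composition concrete: a minimal sketch of the idea that the cycle's end-to-end Delta Q is the convolution of the per-stage Delta Qs, checked against the 20-second loop. The distributions and their means here are placeholders I have invented, not measured values:

```python
# Compose the three per-stage Delta Qs (mint, diffuse, incorporate) by
# convolution to get the delay distribution of one full cycle.
import numpy as np

dt = 0.01                  # time resolution, seconds
t = np.arange(0, 40, dt)   # support for the distributions

def exp_pdf(t, mean):
    """Exponential delay density; purely a placeholder shape."""
    return np.exp(-t / mean) / mean

mint_q    = exp_pdf(t, 0.5)   # assumed Delta Q of minting a block
diffuse_q = exp_pdf(t, 2.0)   # assumed Delta Q of diffusing it
incorp_q  = exp_pdf(t, 0.3)   # assumed Delta Q of incorporating it

# Delta Qs compose by convolution: total delay = mint + diffuse + incorporate.
stage = np.convolve(mint_q, diffuse_q)[:len(t)] * dt
total = np.convolve(stage, incorp_q)[:len(t)] * dt

cdf = np.cumsum(total) * dt
print(f"P(cycle completes within 20 s) = {cdf[int(20 / dt)]:.6f}")
```

The same calculation run against measured distributions is what lets you ask whether 99% of the potential slot leaders will see the block in time.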
We're now building systems which are critical where no one has that control. We have to assume that bad actors exist, right? And those bad actors may range from script kiddies looking to find a fault in your software and steal, I think it was however many millions that were stolen today by somebody, from somewhere, on Bitcoin. Those things are being dealt with by formal proof and by formal analysis; it's the only approach, and no one else is doing it with that rigour. But we're also assuming adversaries can do things with whatever influence they can muster. So we're doing this in the presence of adversaries. We have left the safety-critical role of doing communications; we're now into battlefield communications, right? Which is a level up from what we were talking about earlier.

There's no fixed topology. Who owns the stake, and therefore who will be elected the block leader, and whether they've got the machine switched on or off, is changing constantly. For those of you who are operators who build networks, imagine that your infrastructure is changing continuously and you're still supposed to provide a service. And we can't fix a topology, because if I fix a topology and say I'm always talking to Peter, and that information becomes available, then that exposes a risk that can be used by bad actors. So you've got to have a sense of randomness of choice. I'm just trying to introduce the problem space; hopefully this is all making sense. If you've ever built or thought about a complex system, these are the sorts of things that are going on.

We have found that we have to trust IP at the moment. But actually we can't just trust TCP/IP, or one TCP/IP connection, because basically it's too unreliable to do anything even this critical on: it has too many single points of failure, and it's too slow to converge when things go wrong. And we were discussing earlier the notion of how companies are building their own SD-WANs, how data centres are now making data-centre-to-data-centre networks available, to try and resolve this. Amazon last week launched exactly such a product, where you can now connect all of their internal virtual private accounts together globally. They're now taking over the role that the traditional telcos played, because basically they found a market opportunity.

So, what I'm basically saying is: normally we draw nice pretty fluffy clouds, and we have little clouds of clouds, and that's the level at which people think these systems exist. Clouds of clouds. We can't deal with that. We've got to walk away from the cloud, because the cloud's got too much mist in it: we can't see what's going on and our feet get too wet. We've got to start thinking about how we are actually connected to our neighbours. We have to move away from a cloudy thing, where the cloud with its magical pixies and all its magical TCP engines will make everything good and proper for your application, to something where you've actually got to take some responsibility. So what you're seeing here is: I am connected, but I'm overlaying some notion of connectivity on top of what I've got. I'm creating point-to-point links, because I can't trust the cloud to do everything for me on time.
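A toy sketch of that overlay idea, under my own simplifying assumptions (uniform random choice and a fixed fanout of five, both invented here):

```python
# Each node picks a small random set of peers to form point-to-point
# overlay links over the untrusted substrate, rather than relying on
# "the cloud" to deliver for it.
import random

def choose_peers(self_id, all_nodes, fanout=5, rng=random):
    """Pick `fanout` peers uniformly at random, excluding ourselves.
    Unpredictability of the links is the point: a fixed topology
    would hand an adversary a target."""
    candidates = [n for n in all_nodes if n != self_id]
    return rng.sample(candidates, min(fanout, len(candidates)))

nodes = [f"node{i}" for i in range(100)]
print(choose_peers("node0", nodes))
```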
This is what we were discussing earlier about vertical markets doing their own thing, going their own way: the trust isn't there, the delivery isn't there, therefore you have to construct some sort of overlay network. It's random. It has to be random, because if I make it predictable it becomes easy to disrupt, and depending on your threat model, ease of disruption is an important aspect. The threat models in this space are legion: everything from large criminal gangs who want to pervert the way the chain is being worked, to state actors who don't like the currency in their country, to individual people ganging up on people they don't like because they were dissed in a chat. All of these things happen.

But there's a problem with this, which is that topology is not topography. This lovely pretty picture of all these things connected here doesn't actually capture what's essential to my problem. I've actually got to start worrying about how this is laid out in the real world. I have to account for how it's laid out across the globe. I need to understand how close these things are, how quickly they can exchange messages, in this ever-changing random graph. It is ever-changing, but not necessarily massively fast, let's be honest. Suddenly you've got to turn around and say: how do I construct systems that are not centralised, entirely autonomously run, on a random topology which is changing relatively quickly, that is spread around the world, and is exposed to the risks of direct attack by adversaries and the risks of correlated attacks by acts of nature? Like the North Atlantic ridge moves and basically half the fibres across the ridge get cut; that happened in the late sixties, somewhere in my youth. Or we suddenly get a coronal mass ejection from the sun that basically causes us to have to shut down a third of the world before the electronics get fried, and we all have to go back to the Middle Ages for about a week, or if you're North America, six years. I refer you to a Lloyd's report about this.
So: global scale. We've talked about the ability of Delta Q to measure; we've talked about being able to manipulate things. What I'm talking about here now is its capturing the salient properties of the real world. If I know the quality attenuation from A to B and from B to C, I can actually work out the quality attenuation from A to C from those pieces of information. And if I know it going in the opposite direction, I can start working out how much it's going to cost me to move these pieces of data around. And that looks like a lot of information, but it's only one point of information per link, and this is relatively simple to do in computational terms. But now what's happening is you're exploiting the mathematical properties of this stuff: you can convolve it, as Martin said in the very first slide, to do something useful, to start helping you make decisions.

OK, so let me just simplify a small part here. One of the strange things you might find in networks, for those who've ever done anything like this, is that you build this large amount of connectivity, on which you then run a routing protocol, which throws away most of the connectivity, because it forms a route. When everything is connected to everything at the lower layers, most of that isn't being used. There's a huge cost in the system for those things, just because your routing algorithms are basically pruning things down to a shortest-path-first or equivalent system. So what is the best route? What I want to be able to do here is calculate this, and the point is that with the Delta Q algebra I can. I can work out all the possible routes; it's a quick exercise for the reader which are the full routes, and yes, you can see the obvious one. But what's really interesting is that the smallest number of hops does not imply the earliest time to delivery. Suddenly, sending the data off in one direction, which costs you more hops, actually gives you a better service. And you think, of course it must, if you think about it: it depends on the delays. Next slide.

So you might think: okay, great, that sounds really nice, Neil, but what does it mean? Here's a real example. I can tell you what some of these are. This is the path between Ireland and London just here: 5 milliseconds of G, 33 microseconds per octet of S. Here's one actually going almost right round the world as a single path: it takes 365 milliseconds of G, with 148 of S. These are numbers from which you can construct matrices; you can actually start building with numbers. So we've been talking about the value of this stuff. The reason I got involved, and partially the reason why Delta Q is the way it is, is that as a computer scientist and a mathematician I knew I needed something that would form an algebra, something that actually made sense when you did arithmetic and composition on it. I could not put bandwidths in this matrix, because that doesn't make sense; the trouble is that a lot of people don't understand the difference between a number and what that number means. These are numbers that mean things, and that have the appropriate properties, in appropriate algebraic structures, to do, say, matrix operations on them. Right, thank you very much.
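A worked Python illustration of that point, in the spirit of the G and S numbers just quoted; the per-hop values below are invented rather than taken from the slide:

```python
def path_delay_ms(hops, size_octets):
    """Delay of a path is the sum over its hops of G + S * size,
    where G is the fixed delay (ms) and S the per-octet cost (ms/octet)."""
    return sum(g + s * size_octets for g, s in hops)

# (G ms, S ms/octet) per hop; invented values for illustration.
two_hop   = [(40.0, 0.002), (40.0, 0.002)]
three_hop = [(12.0, 0.004), (15.0, 0.004), (14.0, 0.004)]

size = 2000  # octets
print(f"2-hop route: {path_delay_ms(two_hop, size):.1f} ms")    # 88.0 ms
print(f"3-hop route: {path_delay_ms(three_hop, size):.1f} ms")  # 65.0 ms
# Note: at much larger sizes the ranking flips, because the 2-hop route
# has the smaller total S. Hop count alone decides nothing.
```

With these made-up numbers the longer route wins; the only way to know is to compute, because the answer lives in the G and S of each hop, not in the hop count.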
So the thing that we're actually working on at the moment is measuring these Delta Qs on a hop-by-hop basis out of this random graph, so that we can do something with them. It's a local pairwise measure: we're taking local information, with no global manager here, and sharing that information regularly. There are issues to do with people lying and people masquerading, all of which we are dealing with, understanding how we build models of trust of this matrix into these spaces. You can then calculate this magic matrix of how things move, and from that work out the key piece of information, which is this: if I want to get from A to D, the route I take may be to go via H. It's the standard joke: how do I get from A to B? Well, I wouldn't start from here if I was you. It's about finding the appropriate intermediate point to go via. And what that means is that, by doing this, we can actually know how long it will take those blocks to diffuse to the nodes in the system. And we're no longer trying to minimise the diffusion time; we're trying to maximise the fraction that will get there in time. Back to the original objective, which is there: we're trying to avoid that very hazard, the hazard of not getting things done in time. And you might think, well, this is pretty technical, it's just a blockchain, but getting information to another actor in a space of time is the critical thing for almost everything we've discussed today. It's "don't shoot that person" on the battlefield; "do this" in an emergency; "here is the next block of data for your video on demand", just before the buffer empties. People talk about delay, but what they really mean is that they need the outcome of interest in the time frame that is suitable for their application. All I'm describing here, at one level of abstraction, is that property.

Now, what's really interesting is what Delta Q also tells you, which is probably a little bit mind-blowing: how long it takes to get somewhere actually depends on the size of what you're sending. So there is no longer a single best route; it depends on how big the object you're trying to move is. You may have lots of possible connections, and you may have a routing protocol that will take all but one of them out, and what that's stopping me from doing is making the system more efficient. Because if I have a small packet, and I have a path that has a big S but a relatively small G, I can send it that way; and if I have another path that has a bigger G but a smaller S, there will be a size at which it is better to send it the other way. Now wait a minute: what we've just said is that the underlying infrastructure that we've ignored, because it wasn't suitable for us, because we lost it in our routing, now becomes available for us to use, provided we know how to use it safely, which is exactly the information you get. A simple example: it might be optimal to send small blocks of information over the DSL link, because in general the latency is much smaller; but if I have to send a big block, I can open up a 4G connection, pay the penalty of the extra G, and the S is so much smaller that the block will get there faster. And if you go and take that to your network engineers, they'll all just curl up, grab their knees and rock back and forth gently, because they'll become catatonic: suddenly you've given them complexity they really can't deal with.
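A sketch of that size trade-off, with invented numbers for a DSL-like and a 4G-like link:

```python
def delay_ms(g_ms, s_ms_per_octet, size_octets):
    """Structural delay of one link: fixed G plus per-octet S."""
    return g_ms + s_ms_per_octet * size_octets

dsl      = (10.0, 0.030)  # small G, large S: wins for small packets
cellular = (50.0, 0.004)  # the "4G" case: large G, small S, wins for big blocks

# Crossover where G1 + S1*n == G2 + S2*n  =>  n = (G2 - G1) / (S1 - S2)
crossover = (cellular[0] - dsl[0]) / (dsl[1] - cellular[1])
print(f"crossover at ~{crossover:.0f} octets")  # ~1538 octets here

for size in (200, 5000):
    best = "DSL" if delay_ms(*dsl, size) < delay_ms(*cellular, size) else "4G"
    print(f"{size:5d} octets -> send via {best}")
```

Solving for the crossover size is the whole trick: below it, use the low-G link; above it, use the low-S link.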
And yet one of the nice things is that it just falls out in the wash. This is purely a matrix calculation; you teach A-level kids how to do matrices. It's no more difficult than that, plus making a pick: we compute this, we compute that, and then you choose which one you're going to do.

So what's interesting to me is that we started working with this cryptocurrency, with Cardano and IOHK, to look at the issue of large-scale performance, because it was clear that if the performance was too bad, that would affect the integrity of these systems. What we're basically finding is that we can take the existing public infrastructure, find alternative routes in it, find better ways of choosing which route to use for which size of packet, and work out a whole bunch of mitigations for attacks. Because we can reason about it: if I send a packet out this way and the network's gone down, I should have seen it come back over here; if I send a block out this way, I should see an indication of the block from this peer over here within a certain amount of time, which I can calculate; and if that's not true, I know something's gone wrong. And the first essential step in any mitigation is detecting the fact that you need to mitigate in some way.

We've talked about Delta Q in the real details. What I was trying to give here, because people have asked me "would you take an application and look at it", is that we've taken this application, looked at this particular part here, and worked out how much we can do. We can measure, and we're doing more than measuring: we're incorporating these into things called a process algebra, so we can predict and manage these objects. So we don't just measure this stuff and work out how it will behave; we can actually incorporate these numbers, the Delta Qs, as we design. And we've turned this whole problem into something that we can now put numbers on, budgets on, and hand out: give the network team this problem, while the core team does the issues of how you do transactions in the logic layer; and go to a particular team and say, please try to reduce the time you're consuming, the resources you're consuming, and by the way, if you can get it down by this much we will all be much happier, because if we can go around this loop in 10 seconds instead of 20, suddenly we have doubled the throughput of the system.

One of the things that's interesting about information and quantification is that if you can construct appropriate metrics that have appropriate meaning, you can do a couple of things. One is that you can split the vertical problems and the horizontal problems into manageable groups, because this is about giving people budgets. The other is that you can define layers over which the management chain can manage, and then you've got fungibility, as they say in the trade: you can actually replace things with other things. Right, nobody's having a go at me, so here we go; last one.
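Going back to the detection point above: a minimal sketch, with invented Delta Q values, of how a predicted Delta Q becomes a concrete "something has gone wrong" test:

```python
def echo_deadline_ms(dq_out_ms, dq_back_ms, margin=1.5):
    """If the Delta Qs hold, an indication of our block should be seen
    back from the peer within (outbound + return) time; the margin
    covers normal variation. Past the deadline, start mitigating."""
    return (dq_out_ms + dq_back_ms) * margin

def check_echo(sent_at_ms, now_ms, echo_seen, dq_out_ms, dq_back_ms):
    """True if all is well; False means mitigation is needed."""
    if echo_seen:
        return True
    return (now_ms - sent_at_ms) <= echo_deadline_ms(dq_out_ms, dq_back_ms)

# Example: block sent at t=0, no echo after 500 ms, Delta Qs of ~120/130 ms.
print(check_echo(0.0, 500.0, False, 120.0, 130.0))  # -> False: investigate
```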
So what we're coming to, what we're doing, is saying: we've talked about all the practical measurement things, and the fun stuff for me as a computer scientist is that I can now de-risk my application developments. I can basically write things down in the afternoon, think about them over a cup of tea, and work out roughly how they'll perform, to, shall we say, one or two decimal places. I know whether the solution I have proposed is one that's viable, and if I don't like that universe of discourse I can move on to another one, do half a dozen of them, and still be home in time for tea. This is always a joke when I work with the people at CERN: I could do half a dozen universes a day; they just had the one. I thought I was in the better position. Okay.

So we can use this. What we're doing here is not just reasoning about the application's performance: we can actually reason about how many of those blocks will be too late. So now I can start reasoning about the system's integrity. What I've just tried to illustrate, and I realise it's a high-level view (we can talk about the numbers and exactly what you'd do), is how the information flows: the Delta Q becomes the information you flow around your design and around your build, so that you can construct these highly distributed systems. And we are talking about a highly distributed system that will have minted, I don't know, hundreds of thousands of blocks at this point. It is the 14th epoch, and an epoch is 21,600 slots; multiply that by 20 seconds, and that's the time it's been running.

And here's the daft thing about this stuff, and I don't understand it at all: there's the value of this stuff, in that people perceive the value of it. That's the only thing I can say in terms of the capitalisation of this system. It started off at a quarter of a cent an Ada, which is the unit of currency here. When it was launched it was 2.4 cents an Ada; I think that was in October. And suddenly it went up, and that was a mere half-billion-dollar capitalisation. And then something happened which none of us understand, but people got interested in it, and suddenly it was 12 cents an Ada, three or four billion of capitalisation. That happened in 36 hours. The system scaled; the system grew. There were a few little alarms that the CPU was a bit busy over here. And when you have a business approach, you know the key metrics you have to manage to, because you know what's going to affect the outcome, and therefore you can ignore some metrics: you have a bit of a panic, and then say, oh, that's not important.

So this is a basis for constructing high-integrity distributed systems, which we've all agreed is an essential part of our human infrastructure as we go on. And it enables real networks; it enables new ways of thinking about using the underlying resource. I would argue there's a lot of unused asset sitting out there that can be used, and used in better ways, which is good for the people who've paid for those assets, and bad for the people who think they're going to sell new ones to those people.
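For concreteness, a back-of-envelope check of that running time, assuming the standard figure of 21,600 slots per epoch and 20 seconds per slot:

```python
# Epoch arithmetic for the "14th epoch" figure quoted above.
slots_per_epoch = 21_600
slot_seconds = 20
epochs = 14

print(f"one epoch = {slots_per_epoch * slot_seconds / 86_400:.0f} days")          # 5 days
print(f"{epochs} epochs = {epochs * slots_per_epoch * slot_seconds / 86_400:.0f}"
      f" days of continuous operation")                                            # 70 days
```

Seventy days of continuous operation from a 1st of October start also squares with the November wallet surge described at the beginning of the talk.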