First up, I'd like to get a show of hands just to understand who's in the audience. What sort of mix have we got here? Have we got some developers in? Have we got some people who are working in development? Get a show of hands. Wow! How about that? I'm so delighted that you've come. This is fantastic news for us. We really want to get you guys engaged and make sure that you've got the information you need to start thinking about developing on the platform and to understand the platform better. That's why we put this on today in London. We hope it's of a great deal of value to you. I mentioned who the Cardano Foundation are. If you haven't been to one of these before, we did one of these a little while ago where I gave a bit more of a deeper explanation of who the Cardano Foundation are and how we fit in. There are three entities in the Cardano project, to cut a long story short. We have a segregation of duties and a kind of separation of concerns between the three parties that allow us to operate independently as teams. The Cardano Foundation are responsible for the protocol, brand and community in a guardian role. We're a long-term entity looking after those things. That involves various things, including meet-ups, creating and developing a community, and including holding IOHK to account, to some extent, with some of their development work, as we've done, for example, with FP Complete recently. If you haven't looked at that and you're a developer, we'd really love you to take a look and give us some feedback on what you think. The FP Complete audit of the build of the Cardano settlement layer so far I find very interesting, even though I'm not a developer, and I'd welcome some more debate on that. That's the role of the Cardano Foundation. IOHK are clearly more in the technical lead role. They are the visionaries of the technology road map and very much a driving force in the project. We also have Emurgo, who are responsible for venture building and creating application environments that are going to sit on top of this protocol in the future. If you have development ideas, we have a road map for you to take those ideas forward and we have partners that we can introduce you to, and we'll be happy to talk with you more about that outside of this. I can't stress enough how much of a delight it is that we've got so many people in tonight and that you're here. We really want you to be able to interact and get the information you need to make decisions that are going to help you to develop and create a brighter future for us all, really. I think I've said enough. I think that's probably enough. I'm happy to take some questions at the end of the session if there's anything I haven't made clear or anyone wants to talk to me about. I think I'll hand back to John on that. Is that okay? Thank you, Bruce. Just to give a little bit of structure to this evening's proceedings, I'm going to be introducing the main event, the main speaker, the main man shortly, Duncan. I'll give you a proper introduction in a minute.
Duncan's going to be speaking for a little while, and also with his colleague Neil. We're going to be having some open Q&A after that. We're then going to be going over to our panel. I'm going to be introducing the rest of the panel later on, and we've got some questions that have been posed by the community online which I'm going to be asking those guys, and then we'll also open up for a little bit of Q&A from all of you here today. I would love to introduce Duncan Coutts to you. Duncan is the director of engineering at IOHK, one of the absolute geniuses working on the Cardano platform, and I'm delighted that you're here today, Duncan. We really appreciate you giving up your time. We're just going to do a quick laptop swap, and while we do that, if you'd like to give him a warm welcome and a clap, that would be fantastic. Thank you very much. Thank you very much everyone for coming. So yes, my name is Duncan Coutts. I'm not so surprised to see quite so many people from the Haskell community in the audience, so many of you already know who I am. For those of you who have more sort of been following Cardano, you might know me on Twitter as "that hobo", or "Crypto Jesus", that's the other one. So my background is actually in computer science and Haskell and functional programming. I've been doing Haskell consultancy for 10 years and I've now been involved with this project for some time. Neil, would you like to introduce yourself? I'm Neil Davies. I've been doing computer science for 40 years, 35, 40 years. My background is partly in academia but also in high-performance computing and scalability and safety-of-life systems. So that will give you some... that will show you how the split goes a bit later on. So yeah, we have this title of past, present and future, but basically I've decided it's an excuse for me to talk about the things that I think are interesting, and hopefully you will find some of them interesting too. It'll be slightly awkward because some people in the audience are, you know, Haskell people with lots of academic background, and some of the things I'll say will be very noddy to those people, and some of you will be the other way around. I'll be having to explain some of the weird cryptocurrency stuff to half the audience and some of the weird technical stuff to the other half. So there'll be bits that might feel a bit noddy to you. So okay, just a very brief history. This project started mid-2016. The first Git commit, apparently, I just checked today, was in September. Then April was the first beta testnet... No, the last beta testnet. I came on to the project actually... Let's see, when was it? February 2017. So I wasn't in from the very beginning, and so I came in sort of in the lead-up to the mainnet release, trying to help the team get the mainnet release out the door. We had the last... the release candidate in August last year. The final mainnet release was in September, to great relief from all of the development team, and then in March this year we've had the start of our kind of process of doing rolling releases. So we did do all these sort of big-bang releases and now we're doing this idea of... it's a standard software development idea of doing releases on a time-based schedule. Whatever features are ready go into the next release and you try to just do it on a schedule. So that's the history of the project, right? And as you can see, it was a very short period of time to get to the mainnet release.
Which means it was a sort of start-up-style approach to things, and that has had some interesting consequences, which is partly what we'll talk about. One thing in particular: it's useful to understand... if you're not familiar with cryptocurrencies, it's useful to understand the role of exchanges. So exchanges are like foreign currency exchanges that deal between different cryptocurrencies, or between sort of fiat currencies and cryptocurrencies. And so they're very, very important for a cryptocurrency. A cryptocurrency has no value if you can't trade it for anything, and one of the important things to be able to trade it for is other currencies, you know, pounds, dollars, etc. Since at the moment you can't buy lattes with ADA. So we started with one exchange at the mainnet launch and then added a second a bit later. And exchanges are awkward beasts, it turns out, so we discovered. There's currently about a dozen, I checked today, that have significant ADA volume. The 1.2 release, which is coming out, I think, in the next few weeks, will have a much better API that will allow us to onboard a whole bunch of new exchanges. So we're expecting to have a whole bunch of exchanges that will be integrating our cryptocurrency wallet. One of the interesting things about these exchanges is that they don't behave the way you expected them to behave. You know, you have this mental model of how they're going to interact with your API and they decide to use it in an entirely different way, which causes you to have trouble. I'm seeing some nods here. Actually, one of the interesting things, and I'll talk about this later, after Duncan, is that there was no specification of how this was going to be used, because no one had really done it before. So it's an interesting game to extract the specification and the loading factors as you're building the system. Yeah, that was a bit of an oversight. I don't think the exchanges could have helped us. They didn't know themselves. Some of them are not very good at communicating what they need. But we all know about customers and how well they communicate. They want things to work, but what does that mean? All right, so the present. So there's lots of lessons that we've learned from having built and deployed this thing, and some things worked out really well, and some things not so much. Now, because I'm a software engineer and a computer scientist, I sort of focus on the negative things, like what went wrong. But it's useful to note that actually a lot of things did work out really quite well. I always focus on how things are terrible and we should do things better, but a lot of things did work out really well. The core system stability has actually been excellent. There's been no downtime of the core system. To the extent that, with the Meltdown and Spectre bugs, which we then found out about in January, we were trying to work out why the hell everything had performance regressions in December. We were panicking about what we had done wrong, and it turns out that we'd done nothing wrong. The kernels were being changed under our feet. In hindsight, that was a really good thing to know. And we only discovered that because of the system monitoring. Yes, because we built it like a sustainable, long-lived system. So we had a very good DevOps team who set up all the appropriate monitoring, and probes within the code to see things that are happening inside the code. Not just system-level metrics, but... well, that doesn't matter. Performance. This is an important lesson.
This kind of stuff is all obvious, and it's obvious in hindsight especially. If you start a new project, you would do this kind of stuff from the start, but it's not always a luxury that one has. Performance requirements really need to be clearly understood from very early on. It has to be part of the requirements-gathering process. You can't engineer in performance later. It's not just build it and optimise. And also, you don't know the offered load. You just don't know. There have been some sleepless nights, right? But there have been sleepless nights not because the system was about to fall over, but because we were having difficulty dealing with onboarding 50% new wallets in 24 hours, when we had modelled on the assumption that we'd never grow that fast. We had to make judgements, and we made the wrong ones. There was no history to go on. And so, again, you should think about performance in the project at the beginning, because otherwise... I spent several months working on trying to measure and benchmark and improve the performance of the system, which was perhaps one of the things that meant the mainnet release was later than all the users really would have wanted, because it's very expensive and difficult to do that kind of stuff towards the end. It's much better to do that as part of the design at the beginning. But in a start-up where you're trying to do these things very quickly, that's not always a luxury that you have. Another sort of obvious thing, again in hindsight, for anyone who knows this: distribution, concurrency and networking are hard. They're really hard. And you need to take that seriously, which means hard problems need to be solved with greater formality. More formal methods are appropriate to hard problems. You'll soon find out what the diameter of the world is, which is, by the way, 650 milliseconds. That's how long it takes to start a packet off, pass it through half a dozen Amazon nodes in different parts of the world and get it back coming in the opposite direction. Those are the sorts of things you're dealing with in this scaling problem, because you're trying to get a global consensus across multiple nodes and deliver performance at the same time. We'll talk maybe a little bit more about what sort of formal approach we're looking at for performance a bit later, because that's actually really interesting. As an example, the wallet backend, for those of you who are more Haskell people: a cryptocurrency wallet is a bit of software that observes the blockchain and tells you how much money you've got, and it also lets you create transactions and submit transactions. The Daedalus UI that you see is a front-end, and behind that is the wallet backend. We had lots of problems with our wallet backend, not with its use with Daedalus, but these were problems that we had with the exchanges using our wallet, because the exchanges use it in a very different way to the way that an end user using the Daedalus UI uses the wallet backend. The wallet backend had really been designed with Daedalus in mind and then was being used by exchanges, and the code proved not to be up to the task of dealing with the use case of exchanges, primarily to do with performance, but also scalability and other thorny issues.
To the extent that I decided that it was appropriate to throw it away and start again, basically, with the wallet backend, and that's what we've been doing since the beginning of the year, and we've now got a 30-page semi-formal spec and a nice implementation coming along, which I'll talk a little bit more about in a moment. One of the interesting observations about how these blockchains scale and how the wallet scales was that how the exchanges used things like change addresses, and how they set up their scheme for managing their users, could exercise different parts of the code in entirely different ways than we expected. So it's not yet clear, I don't think, what's the best way for an exchange to run itself, but what we do now know is the rates at which we can post these transactions, and we'll talk a little bit more about that in a while. Related to that, the people who were writing the wallet backend at the beginning did benchmark things. It's not that they didn't benchmark it, but they benchmarked it with Daedalus in mind, and they weren't thinking about asymptotic complexity, which is, for those of you who don't know, a bread-and-butter bit of computer science that you want to apply at the beginning as part of your design. So that's what we're doing now as part of our new specification for a new implementation. So one of the things that was really important about this: you're trying to build something that contains and maintains value, and that is really all about trust. Do people trust the system? Will people place their assets and their value in your system? So the idea was to make the system robust. We basically designed it to survive. We designed it to survive a whole bunch of interesting outcomes, like we designed it to survive a mid-Atlantic earthquake, and you may say, why do you need to survive a mid-Atlantic earthquake? Because when one of them happens, it could cut all of the fibres between Europe and the United States. What happens? This system will survive it. It would survive a ring-of-fire event wiping out a couple of data centres in different countries. It'll even survive a Carrington event which wipes out North America. If you don't understand what those are, I'll let you worry about them and have nightmares later. Okay, look it up. The really good report is by Lloyd's Register, a very well-known scaremongering outfit that supports Lloyd's of London. So we actually considered those things, to the extent that we can actually shut this blockchain down and restart it, should we have to shut the world down for a couple of days because of a large coronal mass ejection. Now that sounds like a bit of over-provisioning, but these are the sorts of projects I've worked on in the past where we had to consider such things. We took those hazards, we took those risks, and we put the mitigation plans in. I've worked with people whereby you really have to do risk and hazard management. We've got that slowly incorporated into the system, and the beginnings of formal processes to sign all those off. But if this stuff is going to replace some of the systems we already have, then you have to take this kind of stuff seriously. You've got to take this stuff seriously, and eventually you're going to have to go to Lloyd's of London and ask for insurance. And at the point you ask for insurance, they're going to say, tell us all about your hazards and your risks and your mitigation strategies. And we have the beginnings of all that documentation in place. We've also beaten it to death in benchmarks.
Yes, yes, we're there. And actually it was really, really difficult to kill, which is actually good news, right? And from that we now know... well, we found a few things that we made better. We constructed a better survival approach to things like DDoS and various other attacks. We basically sidestepped DNS because it was a major risk, and I'm sure there's a whole bunch of people who had Ether in MyEtherWallet this week who wish that they didn't have DNS in their process, because of what happened there. And we tested it. So now we know, basically, the mean time to get something into the chain is about 11 seconds. It takes about one second to do all the processing to get the stuff onto the block; the other 10 seconds is because we've got to wait for half a cycle time. It's between one second and 21 seconds, a uniform distribution. We also implemented multi-output transactions. It was already there in the underlying settlement layer, but not in the wallet; we added to the wallet the ability to do multiple outputs for a transaction. So we're now in a position where this system could actually handle the load of the Faster Payments system and CHAPS and BACS for the United Kingdom. It can basically do that in its current instantiation. That's only about three million transactions a month, but if you assume that banks don't send them one by one but batch them together in small groups of five or six, then that's enough. That's that amount of performance. You could be settling value in minutes. I don't know anything about BACS and CHAPS. Not CHAPS, but BACS and Faster Payments: they have a three-day settlement cycle. I can't see that bit there. Yes, reliable. It's been running 24-7 since we launched. We've not lost control of it, which is a really good statement. There's one guy there who's just smiling, because we haven't even got close to losing control of it. We have experienced how reliable cloud computing is. Basically, on average you've got about three years of uninterrupted running and then something goes wrong. But when you're running 100 nodes, you see that as a weekly or monthly occurrence. 95% of the transactions are there in the next block. Actually, it's higher than that; I was just being conservative with 95. We've just gone over 900,000 blocks in our system. We're operating at 30 times the rate of Bitcoin. Actually, we're beyond the Bitcoin lifetime. We've now doubled Bitcoin's lifetime. You mean in terms of the number of blocks? In terms of the number of blocks, yes, because we have blocks every 20 seconds rather than every 10 minutes, so we're running 30 times faster. We're also able to see some of the issues of scaling that will hit slower systems.
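To make that slot-timing arithmetic concrete before moving on, here is a small sketch of the numbers quoted above. It is only an illustration, assuming 20-second slots and roughly one second of processing; it is not code from the Cardano node itself.

    -- Rough sketch of the block-inclusion latency described above.
    -- Assumptions (illustrative, not taken from the Cardano codebase):
    -- 20-second slots, about 1 second of processing, and a wait that is
    -- uniform over the slot cycle, i.e. between 1 and 21 seconds in total.

    slotLength, processing :: Double
    slotLength = 20   -- seconds per slot
    processing = 1    -- seconds to validate and get a transaction into a block

    minLatency, maxLatency, meanLatency :: Double
    minLatency  = processing                   -- best case: about 1 second
    maxLatency  = processing + slotLength      -- worst case: about 21 seconds
    meanLatency = processing + slotLength / 2  -- uniform wait averages ~10 seconds

    main :: IO ()
    main = putStrLn ("expected inclusion latency: about "
                     ++ show meanLatency ++ " seconds")   -- prints 11.0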
I want to talk now about the present, about things that we're trying to improve, that we're working on at the moment. The example here is the wallet back-end, which I mentioned a moment ago. The wallet back-end was written in a fairly traditional style: you build it, you test it, you test it again. To achieve better quality, you have to take a different approach. If any of you have seen any of the videos I've done, I keep banging on about high assurance and stuff like that, formal methods and that kind of thing. I want to give you a slight flavour of what I really mean. The example here is the wallet back-end, which, as I said, is the Haskell component that sits behind Daedalus, or alternatively is used by the exchanges. We basically threw away the implementation and said, let's start from scratch. Let's start from scratch with a proper specification. What do I mean by that? This specification is precise. It's in a mathematical notation and style. Doing this forces you to think clearly and to simplify things. You fiddle around until you find the simplest possible way of doing something. In the end, you're aiming for a spec that fits on one page. That's success, compared to something sprawling and big and impressive that no one can really understand. The goal is to make it as simple as possible, and having to write it down in a precise way forces you to do that. You look for different ways of thinking about the problem until you find one that seems to be a nice local optimum, that seems to make sense. Doing this approach forces you to address problems that you might previously have been able to sweep under the carpet. For example... and this is a genuine question, I have no idea how Bitcoin does this, or how Bitcoin wallets do this: what is the meaning of your balance when you have received some incoming transaction to your wallet, you've spent on the basis of it, that spending transaction is now pending, and then the block with the incoming transaction got rolled back? What is the meaning of your balance in situations like that? You can construct arbitrarily more complicated situations than that, where there are big graphs of dependent things that are no longer in the blockchain depending on other things that are in flight. These turn out to be very complicated situations. People like exchanges want a simplistic answer to "what's my balance?", and it's not obvious that that question has a really obvious and clear answer or meaning. Writing down these kinds of specifications forces you to think about that, to have those conversations and say, what is the meaning of my balance in that case? If I have made a new transaction and it's in flight, what does that mean? Presumably that ought to decrease my balance, but it hasn't actually been confirmed in the blockchain yet, though it might be. I signed it and it's out there somewhere, but if that depends on something else, this gets very complicated. We've had people who expected their balances to be monotonic, because they were only receiving inputs, but actually they're not, because it depends on the circumstances. Because blockchains are eventually consistent. They're eventually consistent, but not instantaneously consistent, and people are assuming instantaneous consistency. There are whole issues like strong sequential dependencies that people are assuming in the software they're wrapping this blockchain stuff with. I guess that if they're making that mistake, others are making the same mistaken assumptions in this general area. Going through this process forced us to have those kinds of discussions, which we had never been forced to have when the wallet was written the first time round. I suspect there are a lot of other wallets for other cryptocurrencies out there which have also never thought about those sorts of questions. There will be these weird corner cases where the meaning of your balance is, I don't know. People don't like that for some reason.
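To show the kind of thing that has to be pinned down, here is a toy model of those competing notions of balance. It is only an illustrative sketch with made-up types, not the actual wallet specification or its terminology.

    -- Toy model (not the IOHK wallet spec) of why "balance" is ambiguous once
    -- pending transactions and rollbacks enter the picture.
    import qualified Data.Map as Map
    import           Data.Map (Map)

    type TxIn = (String, Int)   -- (transaction id, output index)
    type Coin = Integer
    type UTxO = Map TxIn Coin   -- unspent outputs we own, as currently on chain

    data Pending = Pending
      { spends  :: [TxIn]       -- our outputs this in-flight transaction consumes
      , returns :: Coin         -- change coming back to us if it confirms
      }

    -- "Balance" according to the chain alone.
    confirmedBalance :: UTxO -> Coin
    confirmedBalance = sum . Map.elems

    -- "Balance" if every pending transaction eventually confirms: remove the
    -- outputs they spend, add back the change they would return.
    availableBalance :: UTxO -> [Pending] -> Coin
    availableBalance utxo pending =
      confirmedBalance (foldr (\p u -> foldr Map.delete u (spends p)) utxo pending)
        + sum (map returns pending)

    -- A rollback puts spent outputs back and can invalidate pending transactions,
    -- so neither of these numbers is guaranteed to move monotonically.
    main :: IO ()
    main = do
      let utxo    = Map.fromList [(("tx1", 0), 100), (("tx2", 0), 50)]
          pending = [Pending { spends = [("tx1", 0)], returns = 30 }]
      print (confirmedBalance utxo)           -- 150
      print (availableBalance utxo pending)   -- 80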
Don't even ask about tax liabilities and capital gains and things like that on these objects, because that's the next problem we've got to deal with. What does this mean with respect to what happens if it occurs across a midnight boundary and that decides your tax status? Yes? I don't know. We haven't got a suitable answer for that in the end. It's that broad a set of questions. Taking this approach forces you to try and have precise answers to those kinds of questions. If anyone's seen any of the videos I've done on the IOHK website, I talk a lot about formal methods. Here I've specifically said semi-formal, so what do I mean by that? This is something that is less formal, but quicker and easier to do. So it's somewhere between traditional software development, which I criticise as being quick and dirty, and a very slow, plodding, very careful, methodical, fully formal approach. Semi-formal is some place in between. It's a trade-off between those two things. So in this case, semi-formal means that we do have theorems and lemmas in our specification, and we actually prove some of them, but not all of them. It's certainly not comprehensive. We're not proving that the implementation we write based on that specification meets the specification, but we are testing it rigorously, and we have a clear way of... it's easy with this to then see that the tests make sense, and that gives us a high degree of confidence. And this approach leads to dramatically simpler implementations and more robust implementations, which is in contrast to the first version of the wallet we wrote. I just want to very briefly show you the kind of thing I mean here. This is from our 30-page wallet specification that my colleague Edsko and I have been writing over the last few months. And where was that? I'll just get to this bit here. Not everything here will make sense, but the point is that this is more or less the full specification of any cryptocurrency wallet. This is not actually Cardano specific. It's basically one page; it more or less fits on a single A4 sheet of paper. There are a few definitions where you have to refer to previous pages, and so on. Let's make it a little bit larger, shall we? No, that's even worse. And this is done in, as I say, a sort of mathematical style. It's sort of vaguely Haskell-ish to people that read Haskell, but it's more kind of set theory than Haskell: intersections and sets and funny arrows with slashes through them. But the point is that anyone who's done a bit of mathematics or a bit of undergraduate computer science will be able to read this and see what these things mean and see the definitions. And the point is we've got this down to a very small description, and that is really good. Small, simple descriptions. So we've described: what are the operations on a wallet, and how do those actually work? And this is radically simpler than the implementation that we started with, and the implementation that will be based on this specification will still be radically simpler than the previous one. Another interesting thing about doing it this way, because I've done this sort of stuff in other areas, is that it permits you to have conversations with people about what they expect. I mean, understanding people's aspirations and intent, and requirements capture, is always a problem. And if you get a description
down to this small size, people can put in the hour and a half they need to learn and understand it and build an intuition that you can then work with. That's so important, because then you have fewer disappointed people in the process. So a concrete example of this is that IOHK has a lot of cryptographers, academic cryptographers, who write papers like the Ouroboros paper, etc. But they are not Haskell programmers. So to them, this big blob of Haskell code is a bit of a black box, and all they can do is kind of ask us questions and say, well, does it work like this? And we sort of hum and ha. But they can read specifications like this. So that, exactly: it's an intermediate form that serves as a communication of what our thing is doing. And that's really important, because otherwise you can have misunderstandings between the people who write the papers and the people who are implementing them, and this serves as an intermediate point. Oh, yeah. There have been plenty of things where the researchers thought we were doing one thing and it turned out the implementation was doing something slightly different, for better or worse. Okay, so that's a few things about what we're doing at the moment; now for what's coming down the pipeline. So we've got two smart contract platforms, I suppose, really. Now, smart contract is a terrible misnomer. Most of these things are not contracts and many of them aren't very smart. But they are programs that run on the blockchain, or applications that make use of blockchains, and that's a better description, but it's longer, and people like the term smart contract. So we are pursuing two approaches, which consist of two different execution environments and corresponding sets of programming languages. So the first prong covers the legacy, by which I mean Ethereum, sort of taking a traditional approach, and in this sense traditional also means an Ethereum kind of approach, and that means it will be possible to carry over smart contracts that people have written for Ethereum, that use the various languages people have written for Ethereum that compile to the EVM, the Ethereum Virtual Machine. So we have two VMs that are like the EVM, both implemented using the K framework. I won't go into details about what that is, except that it's another thing that comes out of academia. And so the KEVM is a direct implementation of the EVM using the K framework, which means that we have a much, much higher degree of confidence that it corresponds to the specification of the EVM, and there are tools that demonstrate that. And then IELE is a sort of derivative of that, which fixes most of the most obvious glaring flaws in the EVM. I mean, the EVM was designed presumably rather quickly, and by people who probably didn't have a whole lot of PLT experience, and so there's a lot of things that you would do differently now, and IELE is basically doing that, but in a way that's compatible, so it's still possible to compile Solidity to IELE, for example. Although there will be some subtle things that might be a little bit different, mostly sharp corners taken off. The KEVM is a bug-for-bug compatible version of the EVM, and IELE is like fixing the most obvious things, which means it's not 100% compatible, but it's a better target for writing new Solidity programs. So that's kind of the traditional approach in terms of languages, and in particular it means Solidity compiles to those platforms.
And then we have a slightly more radical approach, at least more radical in the cryptocurrency world. I mean, the Haskell people here would say that's not radical, that's blindingly obvious, and many of us are indeed PLT Haskell people, so it's like, well, let's do it like that. It wouldn't surprise anyone in the room who knows anything about Haskell. So what do we have there? We've got a language called Plutus Core, which is an intermediate language, a core language. If you like, it's equivalent to the EVM byte codes, but it looks more like program text than byte codes. It turns out there's no real need to have byte codes. People think you need byte-code machines for performance, but these systems don't really need to be that performant, so you can actually make it much simpler than a byte-code, EVM-style design. So Plutus Core is our intermediate language; it's based on System F-omega, for those of you who know what that means. Then we have two higher-level languages which compile into that. One is called Plutus, which is very Haskell-esque; it's a purely functional language. And then Marlowe is a DSL which is not Turing complete and is specially for writing financial contracts. It's based on Simon Peyton Jones's contract language from 2001, but adapted to work in the blockchain setting, and that's actually rather interesting. So the combination of Plutus and Marlowe gives us both: a general-purpose Turing-complete language, so you can write anything, though some things might be more complicated; and Marlowe, which is specialised to one particular problem domain, financial contracts, even exotic financial contracts, and so it can't express everything. You can't use Marlowe to write poker, but for things that fit into its domain, the Marlowe programs are radically shorter, simpler, easier to understand, easier to communicate, easier to analyse. So this is the trade-off you get with language design: you can have specialised languages that are brilliant for their problem domain but can only really deal with that problem domain, and you have general-purpose languages, and so we're providing at least one of each, and in the future may have more than one DSL for different kinds of problems.
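To give a feel for why a specialised financial-contract DSL can be so compact, here is a toy combinator language in the spirit of that 2001 contracts paper. It is an illustration only: the constructors are made up for this example and are not actual Marlowe syntax.

    -- A toy contract combinator language in the spirit of the 2001
    -- Peyton Jones et al. contracts paper that Marlowe draws on.
    -- Illustrative only; not Marlowe.
    data Currency = GBP | USD | ADA deriving Show

    type Day = Int   -- days from today, for simplicity

    data Contract
      = Zero                       -- no rights, no obligations
      | One Currency               -- receive one unit of a currency now
      | Give Contract              -- swap the roles of the two parties
      | And Contract Contract      -- both contracts together
      | Scale Double Contract      -- multiply all payments
      | At Day Contract            -- acquire the underlying on a given day
      deriving Show

    -- A zero-coupon bond: receive 100 GBP at day 365.
    zcb :: Contract
    zcb = At 365 (Scale 100 (One GBP))

    -- A simple FX forward: at day 180, receive 130 USD and pay 100 GBP.
    fxForward :: Contract
    fxForward = At 180 (Scale 130 (One USD) `And` Give (Scale 100 (One GBP)))

    main :: IO ()
    main = mapM_ print [zcb, fxForward]

A contract written this way is just a small data structure, which is part of what makes it so much easier to analyse than a general-purpose program.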
Now, this stuff with IELE and the KEVM fits with the SL/CL split. For those of you who don't know so much about Cardano, the concept is that if you look at Ethereum, Ethereum bundles everything together into one monolithic system, so it includes both the blockchain and the cryptocurrency and this very sophisticated and complicated smart contract execution platform, which probably has a very large attack... I mean, it does have a very large attack surface, and probably has lots of flaws that we haven't yet discovered. So the idea of the SL/CL split is that you say, let's have two layers which are sort of in isolation from each other, or somewhat isolated from each other, so that if one fails catastrophically, it doesn't take down everything else. So in other words, we can run the EVM over here, and then if people discover some catastrophic flaw in the EVM, well, it'll take out that part of the system, but it won't take out everything. It's compartmentalisation: in Ethereum you lose everything at that point, whereas here we lose a limb, but you don't die. Let me show you the diagram, because that explains it rather better. So you've got the computation layer; these are all blockchains, all of these big blobs here are blockchains, and then there are the bits inside. So we're looking at a design here where the settlement layer is a blockchain, and that's the thing that's actually been deployed right now. So what we have running right now is Cardano SL, which stands for the settlement layer, and the philosophy is that it's supposed to be relatively simple: no crazy radical design decisions, no complicated smart contract platforms; it should be simple, reliable, trustworthy, etc. And then we can have multiple, and we are indeed planning to have multiple, other blockchains that are connected to some degree but still isolated to some degree, so that, in particular, one can collapse without destroying ADA on the settlement layer. So yeah, we're proposing to have, at the moment, three, and the difference is what kind of smart contract execution platform they provide, and all of them provide something that is a bit more... you'll see that there's Plutus Core in two places here, but one is Plutus Core with more features turned on, essentially. The Plutus Core down here, which in fact is already deployed in the existing system, though no one actually uses it yet, is a very constrained version of Plutus Core, whereas the one up there is a much more expressive and enriched and less constrained version. So, okay, I mentioned the KEVM and IELE, so there are two different blockchains, each running one of those two systems, and then the other one uses Plutus Core. So that means that one covers Plutus and Marlowe, the two languages that compile to Plutus Core, and the other two deal with Solidity, because Solidity compiles to both of those platforms, and then other EVM languages that you've probably heard of will also compile to those two platforms, at least if people write the compilers. For the KEVM one you can reuse the existing compilers; IELE will be some work to implement compilers for, but it's kind of LLVM-esque, so it should be relatively easy for people to write compilers for that platform. And then these arrows connecting it all, this is the side chains stuff, and this allows you to move money from one system to another. So the idea is that you might keep some assets on the settlement layer, where it's nice and safe, move some to your KEVM layer, convert it into an ERC-20 contract, do your CryptoKitties stuff, buy and sell your CryptoKitties, and then move your winnings back to the settlement layer, where, if something terrible happens to the EVM and that whole thing gets taken out, your assets are still safely stored on the settlement layer. And that's what this multi-currency thing is about. Ethereum has multiple currencies, sort of, by having smart contracts, but we're doing both, that's the current plan: both native assets, which are basically labelled, things like ADA, so ADA would just become one label amongst many labels in a UTxO model; and then we additionally have the ability to have smart contract-based assets or currencies as well. The smart contract ones only live at the CL layer, but you can still have corresponding, just labelled, currencies at the settlement layer. So that's the concept. It should give you the same sort of feature set as the EVM or Ethereum does, but in a much safer way and a more extensible way: we can add new CLs and kill off old ones, so we can experiment with things without endangering the platform.
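As a rough sketch of what "ADA becomes one label amongst many" could look like, here is a toy multi-asset value type. The names are hypothetical; this is not the actual Cardano ledger code, just an illustration of the direction described.

    -- Toy multi-asset value: an output carries a bag of labelled quantities
    -- rather than a single integer amount of ADA. Illustrative only.
    import qualified Data.Map as Map
    import           Data.Map (Map)

    newtype AssetLabel = AssetLabel String deriving (Eq, Ord, Show)
    newtype Value      = Value (Map AssetLabel Integer) deriving Show

    ada :: AssetLabel
    ada = AssetLabel "ADA"

    -- Values add pointwise, which is what lets the existing UTxO-style ledger
    -- rules carry over with ADA as just one label amongst many.
    addValue :: Value -> Value -> Value
    addValue (Value a) (Value b) = Value (Map.unionWith (+) a b)

    main :: IO ()
    main = print (addValue (Value (Map.singleton ada 5))
                           (Value (Map.fromList [(ada, 2), (AssetLabel "TOKEN-X", 7)])))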
Other things in the pipeline, of course: full decentralisation with stake pools, which everyone's jolly keen on; hardware wallets, light wallets and mobile wallets are all in various stages of thinking and design and implementation; the SL/CL split, which I just mentioned, that's coming along; and multi-currency, which I just mentioned, the multi-currency ledger. So these are other big features, and of course the smart contract stuff. And then there's the stuff which I'm sort of just labelling nice technology, which of course will have practical consequences, but I wanted to mention it because it's just nice technology. So we'll have a new high-assurance Ouroboros Praos implementation, which is one of the things that I've gone on about in previous video talks, and we're going to have a new networking layer, which is a very major project actually, which you can ask Neil about afterwards. Unless you want to give us the 30-second version? The 30-second version is about IP and why we need something different for Cardano, and all aspects of how the internet protocols, IP, TCP, work: basically, it's not fit for purpose. The internet doesn't work. The internet doesn't work the way we want it to, despite you thinking it does work. He's telling you that it actually doesn't. I know all the people who go grey and get very upset because it doesn't work properly. And there are things like what happened with the fact that you can use BGP to... you do realise $17.3 million worth of Ethereum was stolen yesterday? There's a few nodding heads. It was stolen because somebody could just half-inch the routing to the DNS service. And why can you do that? Because they didn't think about how names and addresses work together back in the 70s. And actually it comes down to even more prosaic reasons than that: there were people who wanted to make their patents pay, and they basically made sure all those things happened. There's a whole bunch of skeletons in cupboards. So why is that relevant for Cardano? Why do you need that? Charles talks about trying to reach three billion people. You try thinking about how you could take what we currently have and reach three billion people with a million, or a hundred million, active wallets. You can't. There actually isn't enough... it reaches issues of causality and issues of how you can convey information. RINA gives us an opportunity. RINA stands for Recursive InterNetwork Architecture; it basically has only one layer, which you can recurse. It gives us ways of solving some of these issues, like having no hegemony in the system. One of the interesting things is that we may even be able to put telcos out of business in the process, because if you can use the blockchain... that's what Daniel really wants to do. I mean, I've got it, it's being recorded, because there are customers... there are my customers running away already. But there are a whole bunch of interesting issues here that will basically allow us to reconstruct the system, reconstruct the way we communicate with each other, and remove a lot of the hegemony that currently exists in the telecoms industry; there's so much waste in that industry. But why for Cardano? Why for Cardano?
I think I'm going to come down to Cardano in space here. If we're going to hit three billion people, we need to have computation spread out in the system so that we can actually manage some of the communication densities and reduce them, and you just can't do that in IP. There are lots of things like mobile edge computing and all the rest of it, additional elements that help us merge bits of the state and bits of the chain anywhere, and one of the Cardano-in-space things we're looking forward to is low-earth-orbit satellites with Cardano nodes going over the top of the pole, where basically you're carrying the value to people who need it. The reason we can't do this today is that we can't actually get, in IP, things like hiding who's where. The thing that RINA has is that you don't have DDoS attacks; it just doesn't have them, because of the nature of the way the addressing works. So there are a whole bunch of risks, state actors, criminal actors, fed-up kids in basements, that exist and are armed by the current internet, which we've got to get rid of to build this kind of system. So, for example, if you have an IP address then you can be DDoSed, and that may seem like an obvious point, but that's a fundamental flaw in the way that IP is designed, and that's a problem for us, because we're trying to build a decentralised system that is robust and resilient to DDoS attacks. But if you find the few hundred core nodes that make up Bitcoin, and you can find their IP addresses, there's not that many of them, and then you attack those, you can take these kinds of systems down. And so if we want to avoid that, that's actually very hard in IP. We have to do things like hiding your IP address, or sharing it under shared secrets so only the partners who are going to use it know it, and then if somebody looks over your shoulder and sees it, you've had it. We're having to deal with a whole bunch of risks and hazards that you shouldn't have to deal with. Or, for example, you might say, well, let's take all the core nodes, which are run by different people, different organisations, so it is distributed, but there's still not that many of them, and let's put them all in a virtual private network which is not on the real internet. Which sounds good, you've separated it out, but then you have to do that in an open, distributed way; you have to allow anyone who has stake to join that thing. So this kind of thing becomes rather tricky in IP, and the idea is that RINA will map very nicely onto this kind of nested network, and so it'll let us give proper DDoS protection. It'll map more nicely onto the structure of what we're trying to do, and in particular it'll let us do things... there's a feature that we want for the future, which is doing off-chain computation, where different people, their wallets or their applications, like a gambling game or something like that, want to be able to talk to each other. There's a thing called multi-party computation, where you have people who are playing a poker game and they all want to be talking to each other directly at high speed, so you can't go via the blockchain: the latency is too long and the data rate is too high. So you have to have those people talk to each other directly, in a secure way where they don't know who each other is, and one in which you can have assurances about the guarantees you need. So with the system we're looking at with RINA, you would just spin up a new virtual private network, or the RINA equivalent of that, and have those six people who are going to play poker with each other join that
virtual private network, and you can do that just like that. They play with each other, they know the names within that network, but they don't know each other's IP addresses or where they are, and then you tear it down at the end. That's not a thing that we can do easily at the moment, and that's something that RINA will give us. So yeah, that's one of the future features, this multi-party computation stuff, off-chain computation, and that will really depend on this technology. Right, I think we've banged on about that enough; we've probably run over. We can take some questions, I think, on this, and then we'll move on to the panel session. Okay, so these are some of the questions that have been submitted, and an opportunity for you guys and girls to ask some questions as well. Question number one: I noticed that in other crypto communities there is a lot of engagement between the founders, dev teams and the community when big decisions are being made; Cardano, not so much. Generating ideas and proving them is left to experts, often just one expert. As someone who has followed economics for 20 years, and it's a long question, outcomes predicted by experts have proved to be dubious. Do you really think this approach will outshine the wisdom of the crowds? So I think they're asking: can there be more collaboration? That's the short version of the question. It's a difficult question, because basically our whole governance is based on backing experts, basically a large number of people who are paid for their expertise. Right, okay. So I think there's experts and experts. If you want a cryptographer to do cryptographic work, I don't think the wisdom of the crowd is the right place to go, because they will just be fooled. My grandmother would totally believe that if she'd thrown a dice five times and a six hadn't come up, it would come up next time, and there's nothing I could do to persuade her otherwise. When it comes to issues of things like the treasury and how people behave as crowds and all the rest of it, we are constructing models to try and evaluate that, and taking as much input as we can from as many people as we can. So there is actually work being done in this area; agreed, perhaps we could communicate it better. So I would say yes and no. No, in the sense that we need academic experts for some of this mathematics and all that kind of stuff, and you can't crowdsource that very easily. But what features are important, what should we be doing? Absolutely. And Charles is on Twitter all the time, and people are pestering him and contributing useful ideas all the time. I think there is actually quite a bit of discussion that goes on at that level, in terms of features and what's interesting and what's useful, but I think on some of the more technical underbelly it's quite hard to do that in a collaborative way. Maybe I'm ivory tower and whatever, but I think there is a value to expertise there. These specs are all available online; they're there to be criticised. If you come up and criticise them in a nice way, we'll probably employ you. That's the expertise we're looking for: we're looking for people who spot our mistakes. Also, maybe to mention, the mainnet is currently operating in a federated mode, so the group of collaborators working on improvements is kind of closed, but with Shelley we will have decentralisation and people will be able to propose updates to the network, so collaboration will definitely be different. Also, it's not really one expert; at any time we have experts, but it's groups, and they discuss. It's not really the case that there's one expert
who would decide on anything; there are very healthy and lengthy discussions on how to do things properly. Okay, some good answers there, so on to the second question. Pretty good. So the second question is regarding Daedalus. This is from Mark Antwon from the Meetup event page. I don't know if Mark is here today... there's Mark, hello Mark, thank you for your question. Are you ready for the question? Daedalus is too slow, and I'm going to expand: it's too slow to encourage the use of ADA as a payment mechanism, or maybe that should be a question. Shouldn't a light wallet or even a smartphone wallet be moved forward on the roadmap? More generally, ADA is a great cryptocurrency and the settlement layer is fast, but more attention to the user journey earlier on might be appropriate. Any comment? Yes, hold the microphone to your mouth, it's got to be that close. You go first. Okay, so I think we've touched on most of the things related to this question today. First of all, Daedalus is running with a full Cardano node in the background, and we had some issues there, so that's why we are rewriting it from scratch. He also mentioned that we need to have light clients and mobile clients on the roadmap, so we are attacking that on multiple fronts, and we will also introduce some optimisations in the Cardano node before we release the new wallet back end. For example, the thing Duncan mentioned: the node does not have the most optimal usage of files and storage of blocks, and that is something that is actually making restoration slower, and stuff like that. So fairly quickly we will improve the performance of Daedalus. So I think part of this issue is how you get to sync quickly, and there are some developments in how you get to be up to date with the current state of the blockchain so you can spend your money, and there is work being done on that. Currently this is built to be as robust as we possibly could have thought of, and in general you start all the way from the beginning and work up to the present day, and when you pick it up again you are picking it up from where you last put it down. There are developments in the pipeline to make that, A, faster, and, B, not necessary to go back too far, but those are actually developments in cryptography, not just engineering; we are working on some of the science to sort some of that out, so we can elaborate on that. So we are attacking this problem on three fronts. One of them is just making that block syncing much faster. Let's call them expedient decisions, engineering design decisions, that were made early on which have consequences now, and the performance is embarrassingly bad, and we can make those better. So we can make the block syncing a lot faster without radically changing the design, just by doing various engineering things in a different way. Then there is also work underway for mobile-like clients where you don't download all the blocks, in fact you don't download any blocks, which relies on a not-entirely-trusted server, or a not-trusted server: it maybe knows a bit about you, but you don't have to trust it, and in particular it can't spend your money. And so that will allow a mobile or other kind of light client which is very, very light: you never download any blocks at all, you just get your current balance and you can sign transactions on your own device. So that's in progress. And then there's this other thing, which is traditionally called a light client, and this is the one that relies on some advances in cryptography which our academic researchers are working on, which is where you don't have to rely
on any server at all, but you still want to be able to get the blockchain. You don't want to have to download all of the old blockchain; you want to somehow magically get to establishing trust in the chain now and get its current state now, rather than starting from the beginning, and that relies on some new upcoming crypto magic which isn't published yet. There are papers being written about that at the moment, and we don't want to talk about them because we don't want to steal their thunder. So the answer is yes, all those things are being worked on, and I don't know how that's reflected exactly in the roadmap, but there's a lot of effort going into exactly that issue, because everybody knows that it's embarrassingly slow to sync. Thank you, guys. So the next question is from Stuart Gallica, who should be here today; maybe Stuart's around. Hello Stuart, thank you for the question. So, in terms of how tokens are to be built on the Cardano platform, will the user interface be simplistic, i.e. Waves, or code-based, i.e. Ether, etc.? Have we... I don't think we've really... well, I mean, maybe I can answer, but also perhaps Darko. The answer is that we don't entirely know exactly what user interfaces will look like at this stage; it's slightly down the road. But we are planning to have an application platform where you can deliver both user interface and logic that runs on your own wallet, runs on your own machine, and interacts with code that's running on the blockchain itself. So we're trying to have a better development experience than in Ethereum, where Ethereum says, here is Solidity, and then you've got to go and write lots of JavaScript to interact with that and write your UI in that JavaScript; it's not an integrated development platform. So we're trying to aim for something better, and this will be further down the road; we're at very early stages of the GUI aspects of this platform. So the initial releases that you'll see with the smart contract stuff will be ones where you interface with it with code, and then later on there will be a platform which will allow you to deploy to users code that runs on their machines, so long as they are happy with you and they're happy with that, which is itself a whole nest of worms. So yeah, the answer is it's slightly too early to tell, but we're aiming for a much more integrated platform than what you get with Ethereum. Do you want to add anything to that, Darko? I think you've already said it: too early to tell. We are calling it an application platform and we'll be sharing more information. So thank you, guys. The next question is from Robert. Hello Robert. How and by whom are blocks validated, and will there be incentives for validators?
Philip, go on. Currently all the nodes in the network validate the blocks that they get, and the crucial thing is that the next slot leader will have the state of the last block and will have validated it, and then the incentive is for creating the next block. We don't have special validators, although they're in the paper. Yeah, in the paper they are, but in the current implementation we don't have them. The summary is that right now every node does it, every node validates every block, which is partly why some of it is rather expensive. But maybe the question is also about the future, once it's properly decentralised: what are the incentives going to be? And the answer is we have a whole group of people looking at the game theory of the incentives. So yes, there will be incentives, and the game theory to make that all work out robustly is all very subtle and tricky. One of our colleagues has been writing lots of simulations and other people have been doing the game theory, but the general answer is that yes, there will be incentives for being a slot leader. Not too many. Sorry? There'll be incentives, but not too many, because you don't want people to just take over, so you've got to actually disincentivise people from getting too big as well. Actually, that's one of the interesting things. One of the problems that you can observe with Bitcoin is that there are more or less only five of the equivalent of slot leaders, five mining pools or whatever, and that's not very decentralised. In some sense it depends on your definition, but it means that there's only those five, and those five have control, essentially. They can control where the chain goes, they can control the rules, they can effectively do the equivalent of voting on protocol updates. The incentives with Bitcoin are to have that kind of agglomeration, and we think that's not such a good idea, and so the design of our incentive scheme is trying to make sure that there are a decent number: not too small, not too big, just right, a sort of Goldilocks range of a reasonable number of nodes that control significant amounts of stake. So yeah, there will be incentives, and that's what all this stuff is about. Will there be a cap on those stake pools? A cap? A cap in what sense? The worry is that larger pools control more, but the design of the incentive mechanism means that that outcome is very unlikely, because people are going to move from delegating to this stake pool to that stake pool if they think they'll be better off, in such a way that the system balances out to not being a small number of large stake pools. If you were in that situation, it would not be a Nash equilibrium, because I would go, I'm getting really poor returns in that massive stake pool, I'm going to move to this smaller, medium-sized stake pool and I will get better returns. And so people will do that, and the sizes of the stake pools will even out, and that's what the game theory that we're working on tells us: there we've got a Nash equilibrium.
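As a toy illustration of that rebalancing argument, and emphatically not the actual reward formula, here is a sketch in which pool rewards stop growing past a hypothetical saturation point, so delegators in an oversized pool see a worse return per unit of stake and have an incentive to move:

    -- Toy sketch (not IOHK's incentive design) of why oversized pools lose
    -- delegators when rewards are capped at a saturation point.
    saturation :: Double
    saturation = 100        -- hypothetical cap on the stake a pool is rewarded for

    poolReward :: Double -> Double
    poolReward stake = min stake saturation   -- rewards stop growing past the cap

    perUnitReturn :: Double -> Double
    perUnitReturn stake = poolReward stake / stake

    main :: IO ()
    main = mapM_ report [50, 100, 400]
      where
        report s = putStrLn ("pool of size " ++ show s
                          ++ ": return per unit of stake = "
                          ++ show (perUnitReturn s))
    -- A pool of size 400 pays 0.25 per unit versus 1.0 for the others, so
    -- delegators leave it until pool sizes even out, as described above.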
Okay, so coming on to the last questions. There are two questions I'm going to merge into one here. This is about getting involved with you guys from a development point of view, and around GitHub. So, are there any active issues to work on, with differing levels of complexity, or are these mostly closed to the public? And partly as well, is it possible to set up at least some reaction system for issues opened on GitHub? Most of the time the community posts just nice-to-have features in there, but sometimes there are real bug reports, and occasionally they don't get any attention. One such issue, for example, is about a really simple GUI bug that may or may not have led people to send wrong amounts of coins. So it's really around how people can interact more, especially around GitHub, and help you guys out. Yeah, I think that's a thing that we will have to improve. Right now we are working on the code base and we know what we're working on, but we're not taking too much input right now from the community, and we're going to change that in some way. Right now it is open source, but it's not... yeah, it's not very welcoming to contributors at the moment, but that's the current state of things. Do you have an internal bug tracker that you're using? And that's got all the embarrassing details which we don't want you to see, because it's embarrassing. Yeah. Awesome. One of the problems is the closed issue-tracking system, which we are planning to open in the future. We are also hiring two people who will actually manage the community. I don't remember the names of both roles, but one is developer community manager, which is actually a person who will be taking care of developer experience, interaction with the community and stuff like that, so things will definitely improve in the future. Are those Cardano Foundation community managers or IOHK community managers? Do you know? I'm not sure, I gave you a facetious answer. Charles is very keen on things being done openly and, in the end, being a proper open-source project with proper engagement, and we're not at that stage yet, and, as he said, we need to improve, and there are hirings in the works to help us with that. At the moment we get bug reports mostly through the user interface rather than taking them via GitHub, because most end users don't know GitHub, but yeah, there are plans afoot to improve that, indeed including having a fully open bug tracker. We will still need to have some things that are secret, like what we are planning to do next, or some security things, to deal with internally initially, but the goal is to be much more open in the end. But yeah, we certainly admit we're not there yet.