Go ahead. A single repo up as a placeholder while some of the tooling discussions work their way out, or push the existing three to four repos that we have in their current state. We're just working through some of the logistics on that. Okay. I just think that sooner would be better than later. But... Yes? This is Patrick Holmes, formerly of Intel. I did send those requests and I copied you on them. So I sent requests to get a repo to Mike and to Phillip earlier this week. Yeah, I'll follow up. I think it got pushed to Andrew, so let me see what we can do there in terms of expediting the proposal. All right, Chris, I see that Emmanuel joined. That brings us to 8 of 11, so we've reached quorum at this point to proceed. All right, Dan, I guess I'll follow up with you and Andrew and see if we can make some progress. Yeah, that'd be great. Thanks. Yep. All right. Thanks. Is Vitalik on? Yes, I'm here. Awesome. So we're at quorum, so we'll call the meeting to order. The first topic is a presentation by Vitalik on the Ethereum technical stack and roadmap, and I think he's got a chart or two on some possible ideas about collaborating with Hyperledger. So Vitalik, I'll turn it over to you.

Okay, thank you. Unfortunately, I can't join through GoToMeeting because it doesn't work for me on a bunch of internet connections. Not great, but for those who have the slides that I passed around by email, feel free to follow along. This is Andrew Keys — should I share my screen so I can show the slides? If you want to, sure. If you bear with me for one second, I can pull up the actual slides. Perfect. As for the slides distributed, I don't see an email from anyone. It was a PDF that I distributed to at least the people that were in the email thread with me, which probably isn't everyone. I'll post it in the Slack, guys. And Todd, if you can make sure to post the latter one, because the latter one has the Hyperledger integration slide as well. Will do. There were two sets of slides. Todd, are you going to post it or...? I'll get it posted to the list now if you post it in Slack. Okay, I'll drop it in Slack. So can everyone see... Yep. Okay, great.

Okay, so shall I start? Yes. Okay, so on to slide two: before Ethereum. In general, when I describe Ethereum, the approach I've found most helpful is to start off by describing the problem I originally had in mind back when I first came up with most of the core ideas in November 2013. This was the time when people were starting to realize that there were applications for blockchain technology going beyond just moving coins around, and people were coming up with protocols like Namecoin and colored coins, and there were "Swiss Army knife" protocols out there. What I refer to by that is the idea that, after things like colored coins, people started to realize that there might be 5 or 10 or 15 different blockchain applications that people might want to build. And so people would come up with protocols that had maybe 15 transaction types, one type for each application. So you'd have a specialized transaction type, understood by the protocol, for entering into a financial contract with some specified leverage, some specified strike price or whatever; you'd have another transaction type for a different type of contract; you'd have another type of transaction for registering some kind of domain or some other entry.
And for every single application, you'd have some explicit support in the protocol. The problem with that approach is basically that it's insufficiently general. Okay, you have 15 applications now, and you create a protocol for these 15 applications. What if someone creates application number 16? Then what do you do? Do you go back to the drawing board? Do you modify the protocol? Do you force everyone to switch to a different system and force everyone to upgrade? It creates a lot of complexity. So the idea behind Ethereum was that instead of explicitly supporting some set of applications, we support a native programming language. At the bottom level, you can think of it like C++, which, for example, doesn't have any special structs and keywords for trade finance or financial settlement and clearing — but people still use it for those things, because those problems get solved at a much higher level. So Ethereum as a base layer tries to be as pure a programming platform as possible, and the business logic for any application you might want to build can be implemented in this programming language that's understood by the Ethereum protocol.

One of the ways I sometimes explain this kind of abstraction is to look at the role that transactions have in all these systems. In Bitcoin, a transaction can do one thing: it can send X bitcoins from A to B. In Namecoin, you have a couple of types of transactions: one of them is "register domain x"; another is "if you own domain x, set the IP address of domain x to y." In Ethereum, it's essentially — roughly speaking, I'm mixing together the low level and some of the higher-level things here — "call function f of contract c with argument a." Now, what exactly do these functions mean? What do the arguments represent? What kind of state does the contract store? All of that is basically up to each individual application. So, next slide. Go ahead. Oh, sorry, we're just getting feedback.

Okay, so on the next slide I talk about the Ethereum account model. A lot of the time when people talk about the EVM, what they sometimes don't understand is that the EVM is just a virtual machine. It does have quite a bit of functionality inside of it, but if you actually want the full benefits of using Ethereum, then quite often you don't just want the EVM; you also want these other Ethereum components that people don't often think about, because they're just there and there isn't really a catchy name for them. And that's essentially the way that accounts work, the way that transactions work as calls between accounts, the kinds of things that contracts can do, and the notions of code and storage. In general, if you look at Ethereum applications, there are Ethereum applications that are just one contract. Those contracts require the EVM, but they also require this object called a contract, which keeps track of some notion of storage, which maintains some notion of state. And state could mean a lot of different things — in one of the later slides, I'll actually go into how people use contracts in lots of different ways. But the most interesting applications often happen when you have many contracts that serve different roles, and these contracts are talking to each other.
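To make the single-contract application idea and the "call function f of contract c with argument a" model concrete, here is a minimal sketch of a Namecoin-style registry written as one Ethereum contract. This is an illustration added for this transcript, not code from the talk; the contract and function names are hypothetical, written in era-appropriate Solidity.

```solidity
pragma solidity ^0.4.24;

// A Namecoin-style registry expressed as ordinary contract code, rather
// than as special-purpose "register domain" / "set IP" transaction types.
contract NameRegistry {
    struct Record {
        address owner;
        bytes32 ip;
    }

    mapping(bytes32 => Record) public records;

    // "register domain x"
    function register(bytes32 domain) public {
        require(records[domain].owner == address(0)); // unclaimed names only
        records[domain].owner = msg.sender;
    }

    // "if you own domain x, set the IP address of domain x to y"
    function setIP(bytes32 domain, bytes32 ip) public {
        require(records[domain].owner == msg.sender);
        records[domain].ip = ip;
    }
}
```

A 16th application needs no protocol change: it is just another contract deployed with different code.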
So it's not just about the EVM; it's also about the framework that exists around it that allows those kinds of things to happen. In general, there are two types of accounts in Ethereum right now. I have a note there — and I'll talk about this at the end — that in the future we're actually going to move toward a one-account-type model, but right now there are two types. One of them is what we call an externally owned account, and this is an account that's controlled by a private key. What does "control" mean? Essentially, it means that if you have the private key, then you can create a transaction: if you use ECDSA to sign the transaction with your private key, then, if that transaction gets included in a block, it's interpreted as a message from that account — with the value, data, gas, and whatever other parameters — to whatever address the transaction is going to. So if you want to create a message going from one of these externally owned accounts to any other account, the only way to do that is a transaction. The second type of account is a contract. A contract is an account which is controlled by its own code.

I'll jump between slides a bit: on the next slide, number five, I talk about some of the parameters that transactions have. This isn't a complete list — transactions also have parameters around things like gas price, the ability to send Ether, and those other features. The reason I'm omitting them is that, in general, Ether doesn't tend to be very relevant to private blockchain use cases. Now, there are some situations — we've talked to some consortiums that essentially repurpose Ether in the protocol. They might give individual users a budget that says, you have the right to spend up to a billion computational steps a day, and sometimes they actually use Ether as a way to represent that. But that's not something you have to do, and it's not really critical to the model.

So the parameters that are important are, first of all, the gas limit. I'll talk more about this later, but one of the key innovations of the Ethereum virtual machine is this notion of counting computational steps: a transaction specifies the maximum number of computational steps that it can take. It specifies the destination address. It specifies data. And it has a sequence number and a signature. If all transactions go from externally owned accounts, the signature determines which EOA it comes from, and the sequence number is just there to prevent replay attacks.

Now, a transaction can go directly to any account, so there are three types of destinations. One of them is an account that doesn't exist. Another is just another externally owned account. Those two cases aren't really interesting — with the existing code, all such a transaction does is move value around. The most interesting case is when the destination address has code. If a transaction goes to an address that has code, then the code runs, and the EVM is the thing that actually interprets the code. The code has the ability to do quite a few things. The main ones are reading and writing storage. Storage is a kind of key-value database — I talked about it on the previous slide too. In the key-value store right now, keys are 32 bytes and values are 32 bytes.
But for efficiency reasons — basically, in order to minimize the Merkle tree overhead — we're actually moving to 32-byte keys with unlimited-size values in the future. Every contract has its own storage, and only that contract has the ability to read its storage or write to its storage. Now, one important point from a security perspective: just because, from the point of view of this sort of object-oriented model, only a contract can read its own storage, that doesn't mean that outside entities — people reading the blockchain, or running the code on a full node — can't see everything as well. So don't make the mistake that some people have already made of treating this idea that contracts can only read their own storage as a privacy feature. It's more of an object-orientation — what's the word — information-hiding feature. Accounts have a 20-byte address, so every account has that address as a kind of identifier.

So if you send a transaction to an account, the code runs, and the code has the ability to read and write storage. The other thing the code has the ability to do is send these things — we have a few different names for them; we sometimes call them subcalls, we sometimes call them internal transactions — to other accounts. The idea is that these things look very similar to transactions. If I'm account A and I send a transaction to account B, and then account B's code runs and executes the opcode that sends something to account C, then account C is going to see the exact same thing it would have seen if account B had actually been an externally owned account and had authorized that message as a transaction. In general, we try to follow the principle that externally owned accounts and contracts have the same privileges — the same ability to do things and interact with other accounts. Now, one property that these internal calls have is that they also provide a return value. So they're useful not just to command another account to do something, but also potentially to get information from a contract.
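As a sketch of that last point — an internal call whose return value is consumed by the caller — here is a hypothetical pair of contracts. The names (AuthorizationStore, Checker) are illustrative only, and the access control is omitted for brevity; this assumes roughly era-appropriate Solidity.

```solidity
pragma solidity ^0.4.24;

// Callee: a contract whose storage other contracts may query.
contract AuthorizationStore {
    mapping(address => uint256) public level;

    function setLevel(address who, uint256 newLevel) public {
        level[who] = newLevel; // real code would restrict who may call this
    }
}

// Caller: issues an internal call (subcall) and uses the return value.
contract Checker {
    AuthorizationStore store;

    constructor(address storeAddress) public {
        store = AuthorizationStore(storeAddress);
    }

    function isAuthorized(address who) public view returns (bool) {
        // The subcall to the store's auto-generated getter returns a value
        // that flows back into this contract's execution.
        return store.level(who) >= 2;
    }
}
```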
Next slide: accounts can be used for multiple functions. So this is — sorry, I didn't hear that well; I think that was someone's background noise. Okay, so accounts can be used for multiple functions. One of the things that people talk about regarding the advantages of Ethereum — and this is particularly true on the public chain, though I think on private and consortium networks it's true to some extent as well — is this notion of synergy. The idea is that we try to make it very easy for different contracts that live on the same chain to interact with each other. And I have a few examples here of what these accounts can represent.

One example is specifying an account and access policy for an individual organization. In general, accounts are the primary identifier both for contracts and, more generally, for actors inside the system. So one thing you could do is have an account that represents some organization — that account might actually be the account that owns assets, has a relationship to some kind of contract, or could even be used for as simple a use case as keeping track, in a cryptographically logged way, of resolutions approved by that organization. So one thing an account can do is specify an access policy, and the way you would do this is with what we call a forwarding contract. The idea is that you have a contract, and in transaction data that contract can accept proposals for operations that it can make. In this case, I have an example policy where someone can create a proposal, and if that proposal spends, let's say, less than 100 coins per day, then any one of the keys can make that proposal and it immediately gets executed. What we mean by "executed" is that the account internally forwards the message: whoever proposed the message specifies the destination, the data, and all the other parameters, and a call with those parameters gets created and sent by that account. The policy here is actually fairly complex: any one out of five keys is enough to spend up to 100 coins a day, but if three out of five approve, then you have essentially unlimited freedom to do anything. This is one example of a policy that some organization holding assets on an Ethereum system might have — but because this is all just written in programming code, you have an extreme degree of flexibility in what kinds of policies you end up creating.

Another one is maintaining a database of who owns how much of an asset, and processing send operations. This is pretty similar to use cases like colored coins and all these other blockchain-based asset issuance schemes. The idea is that you have a contract that represents some kind of blockchain-based asset, and contract storage is used to keep track of how much each person has. The key-value map is basically a map of address to balance, and if you send data to the contract, it interprets that as a send operation: it reduces the sender's balance by some amount and increases the recipient's balance by some amount. The most naive version of this basically implements a colored-coin scheme: you have this contract, and different actors all have balances inside of it, and the contract itself plays the role of a token on the network. But the nice thing about being extremely flexible and synergy-friendly is that if you have any kind of special needs — for example, the need to restrict ownership to actors that have been verified in some way, or the need to have balances that are non-transferable for some period of time, or anything like that — then that's logic you can implement very transparently, without affecting the underlying interface.
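Here is a minimal sketch of such an asset contract, added to make the description concrete. The name SimpleToken and its exact interface are illustrative, not from the talk:

```solidity
pragma solidity ^0.4.24;

// Minimal asset contract: storage is a map of address => balance, and a
// "send" in the transaction data debits the sender and credits the recipient.
contract SimpleToken {
    mapping(address => uint256) public balanceOf;

    constructor(uint256 initialSupply) public {
        balanceOf[msg.sender] = initialSupply;
    }

    function send(address to, uint256 value) public {
        require(balanceOf[msg.sender] >= value);
        balanceOf[msg.sender] -= value;
        balanceOf[to] += value;
    }
}
```

The special-needs variants described above (verified holders, time-locked balances) would just add checks inside `send` without changing this interface.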
Then another thing that accounts are going to be used for is specifying an agreement between multiple parties that moves funds between them based on some conditions. This — and the next one, escrow — is what people often talk about when they think of smart contracts. But in general, the larger point I'm trying to make is that blockchain-based code has the ability to support all of these other different functions, and it can do these next ones as well. So you can have a piece of code that says: if X happens, then send a message to the contract that represents this token and tell it to send 200 coins to this address; otherwise, tell it to send 200 coins to this other address. Then one of the last use cases is storing data that can be queried by other contracts. One example: you might imagine some entity — it could be a KYC provider or something similar — that stores information about, let's say, which accounts have passed some certain level of authorization, and so are authorized users of some particular system.
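As an added sketch of the conditional-agreement use case just described — one contract telling the token contract where the 200 coins go — here is a hypothetical escrow. The arbiter role, the Token interface (matching the SimpleToken sketch above), and all names are assumptions for illustration:

```solidity
pragma solidity ^0.4.24;

// Interface of the token contract sketched earlier.
interface Token {
    function send(address to, uint256 value) external;
}

// Two-party conditional agreement: an arbiter reports whether X happened,
// and the escrow tells the token contract which party receives the coins.
contract Escrow {
    Token public token;
    address public arbiter;
    address public partyA;
    address public partyB;
    bool public settled;

    constructor(address _token, address _arbiter, address _a, address _b) public {
        token = Token(_token);
        arbiter = _arbiter;
        partyA = _a;
        partyB = _b;
    }

    function resolve(bool xHappened) public {
        require(msg.sender == arbiter && !settled);
        settled = true;
        // "if X happens, send 200 coins to A, otherwise to B"
        token.send(xHappened ? partyA : partyB, 200);
    }
}
```

A data-store contract like the KYC example would slot in the same way: the escrow (or the token) would query it with an internal call before acting.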
So now we get to the Ethereum virtual machine. The Ethereum virtual machine is a virtual machine, and the interface that it follows has a lot of parallels to other virtual machines. The inputs are pretty simple: code, data, and — I also put environment variables there. Environment variables are actually technically externs, though in some ways you could consider them an input as well; you can write the implementations to work either way. Environment variables include things like the block number and timestamp — in Ethereum's case, mining difficulty and a bunch of other variables — but theoretically you can hook this up to any kind of environmental data that you want contracts to be able to see. Then we have externs. There are a few of them: there are operations for reading and writing storage; there's making subcalls, or internal transactions; and there's making logs. I won't have time to go into logs too deeply, but you can think of a log as a kind of easily verifiable proof-of-existence entry — it gets stored on the blockchain, but it doesn't get stored in the state, so future contract executions can't actually read it. Now, theoretically, many different virtual machines can be modified and hooked up to this interface. If you really wanted to, you could use the Ethereum account model without the Ethereum virtual machine.

So now the next slide, EVM requirements. These are the requirements that we had. Can I ask you a question? Yes. Do all the contracts run on their — do individual contracts run on their own EVM? Okay, so it depends on what you mean by one EVM versus multiple EVMs. There's one EVM protocol, right? When an account calls — sends a transaction to — a contract, that spins up an EVM execution instance. And if, in the middle of that EVM execution instance, that contract decides it's going to send a subcall to another contract, that spins up another execution instance. So you can have a stack of execution instances if you have a series of contracts sending calls to other contracts. Oh, we have something similar over here, which are Docker images. So how fast is the spinning up of these EVMs? How fast can they spin? I think it's very fast, though I haven't really gathered statistics yet. The EVM in general is designed to be extremely lightweight and usable for this kind of "turn it on, run it for 100 cycles, turn it off" use case.

So, EVM requirements. These are the very specific needs we had when we were designing the Ethereum virtual machine, and we'll go through them one by one. The first one is small code size. If you look at something like C++ — even the simplest C++ program, if you compile it with default settings, can compile to something that's over 4 kilobytes. For computers, that's fine, because we have gigabytes of storage and 4 kilobytes of inconvenience is not all that much. But on a blockchain, you have contract code from potentially thousands or millions of users, and each one of them might have multiple contracts, so space efficiency is really important. Basic contracts on the EVM — a lot of them have code that's less than 100 bytes. So this is one of the important requirements.

The second one is that virtual machine security has to be designed around running untrusted code from arbitrary parties. This is fairly similar to the JavaScript use case: on the public Ethereum network, which is the original use case, literally anyone can send code, and everyone has to run that code regardless of who sent it. So if the person who sent it is a bad guy, you have to make sure the VM is secure enough that you can't escape the sandbox in some sense. In general, we've accomplished this with a combination of strategies. The EVM implementations that exist right now were designed from scratch, so you literally have code that interprets code, processing each individual opcode according to the way the spec says it should be processed. The sphere of what the EVM can do is extremely limited, and the only externs it has are designed around things like storage and logging — a very small number of operations. It has no direct access to memory, for example: it talks to an object called memory, but memory itself is managed by the interpreter, as a kind of expandable byte array. So, aside from that: in general, a small attack surface, being specifically designed for the task, and we've gone through many security audits.

Then the next point is multiple implementations. So — sorry, everybody is talking — Ethereum is fairly unique in that it has about eight implementations of the protocol now, and the majority of them are actually primarily maintained by the community. We did this for a couple of reasons. One of them is simple cross-checking: when we initially created Ethereum, and when we do hard forks, we generally build tests — those tests are created by one client, and then we make sure that all of the clients pass them. If even a single client doesn't pass them, we figure out what went wrong and fix the situation, and make sure that everything stays in consensus.
And the other reason we did this was to mitigate developer centralization, and this was a concern for the public chain particularly, because if you look, for example, at the situation in Bitcoin, at this point you have one development team that controls the code base that everyone uses, and this creates a group which is even more centralized than the miners. That's something we saw early on as a concern, and we wanted to take this other approach in order to mitigate it. So from a philosophical standpoint, we have multiple implementations. Ethereum also does have a formal specification — well, I shouldn't use the word "formal" loosely, I suppose; it hasn't been formally verified or run by a computer — but we have a specification called the yellow paper, which describes in fairly mathematical language how every single piece of the Ethereum protocol works. The idea is that we don't have the philosophy, as some other projects do, that the code is the spec in some sense. Our approach is that the yellow paper is the spec — and in some cases even the yellow paper is wrong, if the intent is clearly different — and it's up to the clients to implement the protocol faithfully.

The next requirement is perfect determinism. Operations that have the same inputs and the same external results have to always lead to the exact same output, and the reason is that this is running in the middle of a consensus architecture: everyone has to precisely agree on the results of every single call. The next point is infinite loop resistance. We don't want untrusted actors to be able to create code that just does `while true` around some expensive thing and keeps running forever. Now, a lot of languages have some notion of infinite loop resistance — if you open up a web page with JavaScript that runs for a long time, the tab just crashes, and these days that's fairly graceful — but in a consensus architecture, the loop resistance itself has to be accomplished perfectly deterministically. You'd have to decide from the start how timeouts are going to work, and this gets even more complex once you start talking about notions like subcalls, where each subcall has its own limit on how much computation it gets assigned. So having perfect determinism at the VM layer is a very simple approach that solves all of those issues.

On the next slide, I talk about metering and gas. This is the way we solve this problem in Ethereum. The approximate idea is to count computational steps, but in reality different operations have different gas costs, and these costs incorporate runtime, consumption of memory and storage, consumption of bandwidth, and pollution of the bloom filter. The way this works is that if a transaction or a subcall — if the code execution — runs out of gas, in the sense that it was assigned 40,000 gas and it burns through all of it, then that operation is fully reverted. So it preserves atomicity: if you write code, you don't have to worry about what happens if some attacker manages to execute only the first half of the code and not the second. But the gas is still treated as fully consumed, and that prevents denial-of-service attacks. Now, an important point here is that this transaction/subcall behavior is on a per-call level.
So if you have a transaction that specifies a million gas, it can create a subcall to some address with, say, 40,000 gas; and if the subcall burns through all of the 40,000 gas, then the subcall gets reverted and, instead of returning data, it replies back with a kind of failure code — but the parent execution instance still continues running. So when contracts send messages to other contracts, they do not have to trust those contracts.

Questions? Yes. Does this model work differently in private chains, where you might not have the same economics with Ether? Okay, so this is actually a good point, because a question a lot of people raise — the notion of gas does not depend at all on the notion of Ether, right? Gas is purely a metering technique. The way this works on the public chain is that a transaction says: I'm willing to spend up to a million gas, and I'm willing to pay, say, 0.0005 Ether per gas for that, and then it's up to the miner to include or not include that transaction. But once it's established that the transaction is included, then within the actual account model, execution environment, and virtual machine, there is no real link between gas and Ether. In a private chain, it depends on the use case: sometimes you might have contracts that are run by trusted actors and you might not care that much, but sometimes you might want users to be able to provide custom code, in which case you're going to want some notion of metering — but you do not need to use Ether as the way of economically rationing that metering. As an analogy, I would think of gas limits in Ethereum as being like bytes of transaction space in Bitcoin: in Bitcoin, every transaction consumes a certain number of bytes, and there's a limit of 1 million bytes per block; in Ethereum, think of gas and gas limits as playing a similar role. So in private chains, there are a few different techniques. One technique is to say anyone can send as many transactions as they want, and every transaction can only have a gas limit of, let's say, 500,000. Another technique is to actually assign people credits: you say, you have the right to run a billion gas per day. If that's what you want to do, the simplest approach is actually to piggyback off of Ether: in your private chain you would still use Ether, but you would have a contract that centrally assigns everyone, say, a billion Ether a day, and you specify that you only accept a gas price of one Ether per gas. That can be a very reasonable way of allocating computational resources and preventing transaction overuse.
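To illustrate the per-call gas budget and the "subcall reverts, parent continues" behavior described above, here is a short sketch using Solidity's low-level call with an explicit gas stipend. The contract name and 40,000 figure are illustrative, following the example in the talk:

```solidity
pragma solidity ^0.4.24;

contract Parent {
    // Give the subcall its own 40,000-gas budget. If the subcall runs out
    // of gas, only the subcall is reverted: the low-level call returns
    // false and this (parent) execution keeps running. The 40,000 gas is
    // still treated as consumed.
    function trySubcall(address target, bytes data) public returns (bool ok) {
        ok = target.call.gas(40000)(data);
        if (!ok) {
            // A failure code came back instead of return data; handle it
            // here rather than trusting the callee to behave.
        }
    }
}
```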
And there's also a modality where we can have basically modular M-of-N, round-robin for private implementations as well, so basically... Continue talking. Yeah, sorry — I think Andrew was just starting to talk about consensus algorithms. This was brought up on the previous call that we had with just a couple of people earlier, but I am deliberately focusing this presentation on the Ethereum virtual machine and the account model, because those are the components that I think the Hyperledger project is most likely to find interesting. In general, though, we're moving in a direction of modularity, and so even though public-chain, mainline Ethereum uses its own particular brand of proof of work, chances are the groups that are looking at using consortium-chain Ethereum right now are generally swapping that out for round-robin consensus, some PBFT, or whatever other algorithm.

So I'm going to jump to slide 10, precompiles. The idea with precompiles is that some cryptographic operations are too slow to be done on the Ethereum virtual machine. In general, the EVM does have some inefficiencies, but those inefficiencies are completely fine for fairly simple operations where all you're doing is adding, subtracting, and sending subcalls around. But if you want to do things inside the EVM like elliptic curve signature verification, or hash verification with RIPEMD-160, then what we do is provide native versions, and the way you call those natively implemented versions is by calling accounts at pre-specified addresses. So if a contract makes a subcall to account 1, then in the data of that subcall you specify the elliptic curve signature and the message, and it replies back with either zero, if the elliptic curve signature verification failed, or the address — essentially the hash of the public key — if it was able to successfully recover the public key from the signature. SHA-256: you send it data, it replies back with the SHA-256 of it. RIPEMD-160: same thing. Number four is identity — this is identity in the mathematical sense, not in the standard blockchain sense — so you send it data and it sends the same data back; the only reason this exists is to facilitate efficient data copying.

In general, the idea here is that if you're going to have a private consortium chain, then depending on the use case there might be some other computationally intensive operations that you have. I was talking to a firm in London, for example, that was doing financial derivatives and starting to build a consortium Ethereum chain, and they actually wanted to use a third-party library to do things like computing the valuations of leveraged financial contracts — in order to figure out whether or not you need to do a forced liquidation event, or to figure out how much money each party gets. These calculations are fairly mathematically complex; they would be either too difficult or too much effort to implement in EVM proper. So instead, they're probably going to extend the precompile set that's used on the Ethereum public chain, adding their own precompile address that you can call, which replies back with the results of computing a function that's actually implemented in a library in Java.
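For reference, here is a sketch of how those precompiles surface in Solidity — the `ecrecover`, `sha256`, and `ripemd160` built-ins compile down to subcalls to the pre-specified addresses described above. The contract name is illustrative:

```solidity
pragma solidity ^0.4.24;

contract PrecompileDemo {
    // These built-ins compile down to subcalls to the precompile
    // addresses: ecrecover -> address 1, sha256 -> address 2,
    // ripemd160 -> address 3 (address 4 is the identity/copy function).
    function hashes(bytes data) public pure returns (bytes32, bytes20) {
        return (sha256(data), ripemd160(data));
    }

    function recover(bytes32 hash, uint8 v, bytes32 r, bytes32 s)
        public pure returns (address)
    {
        // Returns the signer's address, or zero if verification fails.
        return ecrecover(hash, v, r, s);
    }
}
```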
So the next question here: within the EVM, I believe you have an operation called ECRECOVER? Yes. So what does that do? Okay, so ECRECOVER is not an opcode; ECRECOVER is a precompile, right? Opcodes are things where there's a specific byte in the code, and if the execution runs over that byte, it executes it — SHA3, for example, actually is an opcode. But if you do an ECRECOVER, what that actually compiles down to is basically a call to address 1. Okay, so it has all the efficiency of native execution.

So the next slide is the ABI. Near the beginning of the presentation, I described Ethereum contracts as being about function calling. Theoretically, at the very bottom level, that's not really true, because there is no concept of functions: all there is is contracts that have code, and you can send subcalls to them that have data. But in practice, most contracts have multiple functions that you might want to call. If you have some kind of registry, you might want to register a key, or set the value associated with a key. If you have a currency, you might have "send," but you might also have methods like creating what we call a check, and there are specialized cases where there are different kinds of sending that you want to do. In the case of a financial contract, or even a blockchain-based order book, you might have a function for creating an order and a function for filling an order. The way we meet this need is through the ABI. The idea is that the first four bytes of the data of a subcall are a function identifier — four bytes derived from the hash of the function signature — and the remaining data represents the arguments in a standardized serialized format. If we're talking about statically sized arguments, the format is very simple: just 32 bytes for each argument. So the usual "send" would have exactly 68 bytes in it, where the first four bytes are the identifier, the next 32 bytes are the destination address, and the next 32 bytes are the value. Once we start talking about including structs or dynamically sized arrays, the ABI becomes somewhat more complex, but in general it's still fairly straightforward — it's designed around this model of 32-byte chunks.
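To make the ABI layout concrete, here is a small sketch showing how the 4-byte identifier is derived and how the 68-byte call data for a "send" is assembled. The signature `send(address,uint256)` follows the token sketch earlier; the contract name is illustrative:

```solidity
pragma solidity ^0.4.24;

contract AbiDemo {
    // The 4-byte identifier is the first four bytes of the hash of the
    // canonical function signature.
    function selector() public pure returns (bytes4) {
        return bytes4(keccak256("send(address,uint256)"));
    }

    // Full call data for a send: 4-byte selector plus two 32-byte
    // arguments = 68 bytes.
    function sendCallData(address to, uint256 value) public pure returns (bytes) {
        bytes4 sel = bytes4(keccak256("send(address,uint256)"));
        return abi.encodeWithSelector(sel, to, value);
    }
}
```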
Then the next slide: high-level languages. Obviously, developers don't want to program in raw EVM assembly, so developers write code in higher-level languages instead, and a compiler compiles it down to EVM code. The higher-level languages include Serpent and LLL; the most popular one, and the one with the most development effort right now, is Solidity. And there is active research going on regarding how to continue moving HLLs forward: we're starting to do work on formal verification in Solidity, and there are future ideas that I've seen raised by different groups — unfortunately there hasn't been much action on them yet, though that may start in the next few months — like total functional high-level languages, plain old functional high-level languages, and domain-specific languages. One idea I heard was a financial DSL that focuses on flows as a kind of fundamental unit. So there is a lot of opportunity in designing different languages that compile down to EVM code, and that's an ecosystem that we're quite happy to support and consider and help along.

I'll probably do the integration-opportunities slide last, so just to move a bit to future development. This is part of our roadmap — probably the part of our roadmap that's most relevant for you guys, because something like Casper, our proof-of-stake algorithm, is less so. Although I will say that there actually are some consortium groups that like Casper and are interested in using essentially a version of it where every member of the consortium is assigned one coin, and the reason is that they like the kinds of trade-offs it makes between efficiency and finality — so it might be even more useful than you'd think at first glance.

So, in general, future development. One item is to merge the two account types, so there would be just one kind of account. The way this works in practice is that the protocol would no longer really understand the concept of a signature; instead, every transaction would start off with an initial call from some sort of standard entry-point address — we're thinking that address would be 2^160 − 1, the highest possible address. Probably few users would want accounts that are directly protected by private keys; in general, all of this would be done with contracts. So every account would be a forwarding contract, and the contract code would specify the account's access policy. It would work fairly similarly to the multisig example I described a bit earlier, though in most cases it would be a bit simpler. This is nice for a few reasons: it allows for more innovation in how access policies work, and it lets people seamlessly move to other cryptographic algorithms. One very practical use case: let's say at some point people decide to move to Ed25519 or Lamport signatures or whatever — we want to set things up so that we don't have to coordinate that move ourselves; users can do it on their own. So if you're paranoid about the NSA, you can move your accounts to Lamport signatures pretty much as soon as you want, at least once we get this feature rolled out.
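Here is a minimal sketch of the "every account is a forwarding contract" idea, added for illustration. The access policy here is a single owner key for brevity; the 3-of-5 multisig or per-day spending limit described earlier would be a drop-in replacement. All names are hypothetical:

```solidity
pragma solidity ^0.4.24;

// "Every account is a forwarding contract": the access policy lives in
// code, so moving to a new signature scheme means deploying new policy
// code, not changing the protocol.
contract ForwardingAccount {
    address public owner;

    constructor(address _owner) public {
        owner = _owner;
    }

    function forward(address to, uint256 value, bytes data) public {
        require(msg.sender == owner);        // the account's access policy
        require(to.call.value(value)(data)); // re-issue the call as this account
    }

    function() public payable {} // accept incoming value
}
```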
The next part is sharding. Sharding is how we accomplish parallelizability. The full vision of sharding is designed around the public chain: this is the idea of being able to support potentially millions of transactions a second — or, in the initial versions, thousands to tens of thousands — even if every single node in the network is just a laptop. The way we do that, assuming there really is that high a load on the Ethereum network because there are a lot of computers on it, is that instead of requiring every single validator to verify everything about every block, we randomly assign blocks to different validators and they get processed in parallel. For consortium-chain use cases, I think the important note is that you can just require nodes to have powerful CPUs and lots of cores and lots of memory, so you don't really need to do that kind of cross-node sharding, and there's a much weaker version of sharding that gives you similar levels of scalability. The idea here is that we restrict the effects of transactions to particular address ranges, and this allows you to statically prove that particular transactions are disjoint, and so they can be processed in parallel. Then, if you want to perform some operation that has effects across multiple shards, we do what we call an asynchronous call: you have one transaction on one shard which creates what we call a receipt, and then another transaction on another shard which verifies the receipt from the first shard. You actually have to take the receipt from the first shard, include it in the transaction data going into the other shard, and then based off of that, you do your computation on the other shard. That creates an asynchronous programming model, but the benefit is that it gives you parallelizability. These changes are relevant because if you want to build scalable Ethereum-based applications, you're going to want to work with this model — but this is fairly long term.

Then the next one is increasing efficiency by implementing cryptography on top of the EVM. We have two different paths here — actually, three different paths. (I'm sorry to cut you off, but somebody on the phone is not muted.) Okay, so we actually have three paths; I only mention two of them in the slides. One of them is just making the EVM more efficient generally. Another one, which other developers are working on, is adding additional precompiles — I talk about ECADD and ECMUL, two operations that you need in order to, say, efficiently implement ring signatures on Ethereum. Then, if you want to verify RSA — and one of the use cases we've had for RSA is that we've wanted to be able to verify standard certificates, the kinds that are used in mainstream identity applications that already exist outside of blockchain land right now, anything to do with internet security — we're considering the idea of adding precompiles for big-integer math: adding, subtracting, multiplying, and dividing arbitrary-size integers. We also have one person working on an experimental architecture — this is kind of like an EVM 2.0. If we go this route, we're going to have a fairly seamless upgrade path, where you'll be able to compile EVM 1.0 code into EVM 2.0 code, but the idea is that it takes WebAssembly as a base and uses a kind of transcompiler to add metering to WebAssembly code. You would take a piece of code, and just before every branch condition, you would insert instructions that reduce the amount of gas remaining by some amount; and if the gas hits zero, you hit an exception. This is still at a fairly early proof-of-concept stage; it's one of the things we're looking at. The other thing we're looking at is the more iterative approach, which is continuing to hammer at the EVM roughly as it exists today, introducing efficiency features for things like different sizes of integers, but in general with the goal of increasing efficiency to the point where we should be able to implement any kind of cryptography on top of the EVM at very reasonable speed.

So the last slide — well, the slide before, slide 13 — is integration opportunities for Hyperledger. This is probably the area I know a bit less about, and if we want to go more deeply into this, we'd probably need more follow-up calls; I'd need more information about specifically the kinds of things you're doing and the way the architecture works at this point. But in general, there are a few paths. One of them is that you could look at just the EVM: you could keep the way you're doing things exactly the same right now with regard to the account model, but provide the EVM as an option — essentially, I think, what you were calling chaincode. So if people want to enter into
agreements that are mediated by some piece of EVM code, then you offer them the ability to do that. One of the challenges there is that you have to figure out: are you looking at the EVM as just a kind of stateless thing? Do you want to map it to some notion of storage? Are you even going to try to support any notion of internal transactions or subcalls? Another approach is to try to integrate both the account model and the EVM. And the third option I offer there is to work on an account model that takes the desired properties that the EVM's account model has at this point and comes up with an architecture that satisfies all the same properties — the ability for contracts to call each other, the notion that you have these long-term persistent objects with identifiable addresses — and see if there is some way to integrate that, so that all this cross-contract calling can actually happen. The other pieces of Ethereum — things like the consensus algorithm and the network layer — you're probably less interested in, which is why I was trying to focus on the virtual machine and the account model for this presentation. But in general, I would say this is the sort of thing where I'd need to take some time to understand more of what you're doing in order to figure out the best approach.

So, Vitalik, quick question. If you think the public Ethereum network could be like the next generation of the internet, using blockchain technologies — of these three options, what I'm looking for, and what I think is the optimal path for Hyperledger, is the ability to integrate private consortiums and private infrastructures and have the ability to eventually interact with the public ledger, similar to how intranets work with the internet. What do you think is the path of least resistance? So in general, I think that as Ethereum development goes particularly in the scalability direction and we start talking about things like sharding — cross-shard operations require very similar kinds of considerations and infrastructure to cross-chain operations — I think over the next couple of years we're probably going to move to a model where the developer tools around cross-anything operations are going to be fairly smooth and easy to use in a lot of different ways. There are actually multiple directions from which we're attacking this. One of them is sharding. Another is that ConsenSys is working on BTC Relay, which is a way for Ethereum contracts to read the Bitcoin blockchain, and there is infrastructure being built around that. So a possibility would be to follow along with the progress on both of those. Then one could imagine some kind of standardized software — a consortium-chain relay. For some regular consortium chain, regardless of how it works, you could imagine a consortium-chain relay package where, on the Ethereum public blockchain, you would have the ability to verify the signatures of whatever validators were in the consortium network, so that if you create some transaction or some log or some state entry in the consortium chain, the public chain can verify it.
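As a purely hypothetical sketch of what such a consortium-chain relay contract might look like on the public chain — verifying that a threshold of known consortium validators signed some claim — consider the following. The contract name, threshold scheme, and sorted-signature trick are all assumptions for illustration, not a description of any real relay package:

```solidity
pragma solidity ^0.4.24;

// Hypothetical consortium-chain relay: the public-chain contract knows the
// consortium's validator set and accepts a claim (e.g. the hash of a log
// or state entry) once enough validators have signed it.
contract ConsortiumRelay {
    mapping(address => bool) public isValidator;
    uint256 public threshold;
    mapping(bytes32 => bool) public accepted;

    constructor(address[] validators, uint256 _threshold) public {
        for (uint256 i = 0; i < validators.length; i++) {
            isValidator[validators[i]] = true;
        }
        threshold = _threshold;
    }

    // Signatures must be sorted by recovered address, which rules out
    // duplicate signers without extra bookkeeping.
    function submit(bytes32 claimHash, uint8[] v, bytes32[] r, bytes32[] s) public {
        require(v.length >= threshold);
        address previous = address(0);
        for (uint256 i = 0; i < v.length; i++) {
            address signer = ecrecover(claimHash, v[i], r[i], s[i]);
            require(signer > previous && isValidator[signer]);
            previous = signer;
        }
        accepted[claimHash] = true; // consortium-chain event is now provable here
    }
}
```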
And then, if you integrate the EVM and the account model in the consortium chain, the other thing you could do is have the relay work in reverse: you can have a relay where Ethereum contracts in the consortium chain can essentially read proofs that some particular thing happened on the public chain. Once you have both of those, you basically have a complete asynchronous programming model, where you can just write applications that look like promises, JavaScript callbacks, whatever, and they compile down to things that work across both the consortium chain and the public chain.

So, this is Chris. Thanks for this — this has been, I think, really informative, and I like the thinking on this last slide about some of the ideas we might pursue to move forward. Certainly, if you look at it from a progressive perspective, the first stage would be to get the EVM somehow integrated with what we're doing in Hyperledger, and then we could explore what it would take to get the account model to achieve some of the things you were just discussing. So, Andrew, we're having the hackathon next week, Thursday and Friday in New York — I'm wondering, is there any possibility that we might be able to do a little bit of experimentation during that hackathon to see if we can... Absolutely. Vitalik, how long will you be at Consensus? Obviously, the ConsenSys personnel are here in New York, resident and ready, willing, and able to help this type of integration as you see fit. Vitalik, what are your thoughts? Regarding me being there in person: I'm going to be at Consensus, and maybe you can even suggest dates, and we'll see if I can come in person, whether to the hackathon or something else, aside from that particular week. Aside from that, the Ethereum Foundation is quite busy on core research and development, so we're not going to be able to contribute full-time developers to this, but if we can start to figure something out, then I'll be happy to have more of these discussions and help provide guidance on the best way to move forward on any of these. Chris, you and I can speak offline — we have a few developers at ConsenSys that could probably help with this endeavor. Okay, super, thanks, Andrew.

Thank you very much — this was a great talk; I know I certainly enjoyed it, and based on some of the discussion in the chat, it seemed like others were engaged as well. So I think this was very good; thanks. Are there any questions for Vitalik? Should we open it up for any questions? We're having an architecture workgroup meeting on Friday next week — if Vitalik is available, he could join and participate in that; one of the topics we have teed up is how we evolve the architecture going forward. Okay, so I would say send me an email with the time, and I'll see if I can participate. I am fairly busy with meetings and conferences the next couple of weeks, but the earlier you tell me, the more flexible I am able to be, and the earlier I can tell you if I can make it. Cool. And we also have some developers that will be in New York that could go in his stead — they may not know the exact breadth, but they could step in. Okay, super, thanks, Andrew, I appreciate it. All right, so let's move on — we'll dispense with the workgroup updates because there's not a lot of time, and I know that there's concern about... Is there anything else that you need me for?
No, but we'd love to have you next week at the hackathon to help. Perfect — so I'm going to drop off now, and feel free to send more information about any of that to my email. Okay, thanks.

So, again, I think we'll dispense with the updates; we can take care of that next week. Very quickly, from an action-item review: the TSC representation policy draft — I haven't had a chance to do that, but I will try and get it done before next week. Finalize the technical face-to-face date and venue — that was done; thank you, Todd, and thank you to DTCC. For those of you who haven't registered and are planning on attending: it sounds like it's going to be potentially very exciting, with the possibility that we may do some exploration of integrating the EVM with Hyperledger's fabric — and again, I think there are a number of different paths we could take there. The white paper draft — Dave, I'm assuming that's still on track for next week? Well, I didn't promise it for next week, and I'm not promising it for next week. We are aiming to have, at the start of next Wednesday's meeting, what we think is the first draft; we're going to go all the way through it, and then it will be available — but it's not promised. Fair enough. Set up the Sawtooth Lake — we talked about that earlier. Connect Patrick with Rai — that's the same thing. Create the fabric API repo — Tomas, that's still TBD, but I think you're going to have that in time for next week? We're doing our best to work on this, but this is really a split of an existing repo that we use in our development internally, and it's quite disruptive for internal processes; therefore it took a bit longer than I anticipated, but we are working on it, and hopefully we will get it done by then. Okay. And then, last but not least, we were going to try and set up some time to discuss the exit criteria — again, I think the best thing to do would be, as we're together next week, hopefully there will be enough of us that we can get together and start discussing it face to face, and we'll try and fit that in as we can.

So let's get into the planning. The plan for the face-to-face would be very similar to what we had last time. I think there will be a steady state of hacking on various experiments — I think the EVM experimentation is potentially a good idea. I'm planning on chatting behind the scenes with Dan about a potential hack between Sawtooth Lake and the fabric, and again, Dan, if you have any specific ideas for Sawtooth Lake from a technical perspective, I'd certainly be interested in hearing about those and getting people prepared for any of those hacks. Then there's still, I think, a little bit of work to be done on the integration of the DAH code and the repository that they're going to be setting up, the fabric API. And then there's, I think, also going to be plenty of opportunity to hack on the infrastructure — there are some bots we could set up to integrate Slack and Git and so forth; I've been playing around with that, but I think it would be interesting to start doing some of that work. And then, potentially, we could also think about starting some of the transition from Travis to Jenkins. So those are some of the thoughts I had for the face-to-face. But there are also the workgroups — requirements, white paper, and architecture — so maybe we could just spend a few minutes here trying to figure out the right timing for these, so that we can avoid any potential conflicts. So I think, Christopher, you were trying to get Thursday, I think, for
identity? Yeah, so we have a few people that could only make it on Wednesday. We had some people that could not make it on Friday when we did our Doodle poll, so we had requested that time; and then, when we talked yesterday with the architecture group, a more casual poll suggested they would prefer Friday — and the topic of which day would be better wasn't brought up with the requirements working group. So, Patrick, what was your thinking for the requirements? Is Patrick still on? I think he had to leave. I will follow up with Patrick. So, this is Ram — we were thinking of Friday AM for the architecture workgroup. Okay, because if we are able to do Thursday AM, there is a little overlap between the requirements working group and our working group — maybe not too much. And Dave, what about you and the requirements — the white paper, rather? We didn't actually talk about it; we kind of lost track. I guess Wednesday is our typical meeting, but — I'm sorry, are we talking about a presentation to the group? Are you going to want to spend breakout time with the white paper working group, or were you not planning on actually working on the white paper during the face-to-face? Yeah, so, well, I'm going to reach out to the members and find out who is available for next Wednesday, and if not — because of Consensus — then we will reschedule, and if Friday looks to be the better of the two days, then we will do that. But we didn't actually get a chance to organize that. Okay. Well, since Patrick is unfortunately not here — I think, Chris, having identity on Thursday, is that the thinking? We don't need much — I'm thinking give us an hour and a half, two hours, and we'll do great. I just thought it would be really useful to have a specific slot. I mean, I was thinking 11 to 12:30 or something like that, and if there was a room that also had a conference-call phone, we might be able to include some of the people who couldn't make the actual face-to-face. Yeah, so I'm not sure — I suspect that we're going to be in the same place we had the original face-to-face in January. Todd, is that correct? Yeah, it's a very similar space; they may be able to partition off a section of it — I'll check in with our events team. It's not vital; it was just a possibility. But there are 11 people from the identity Doodle, out of 30-some-odd total, who said, yeah, we're going to be there and we want to talk about identity. Okay, all right, that sounds good. So maybe — you said you wanted two hours — 10:30 to 12:30, and then that would work too. And Ron, you wanted the morning, 9 to noon, or is that what you were thinking? Yeah, that would work great, 9 to noon, and again, if we have a call-in, that would be great, but it's optional. Okay, and then, if there's any overlap with requirements, they can be working in the afternoon of Thursday, and hopefully that will be an effective enough breakout. So I'll sketch that out and send it around, and if we hear any screaming, we can adjust it on Slack or on the mailing list. We do actually have a few minutes left — so actually, Dan, have you guys given any thought to any hacks, any explorations that you might want to offer up for Sawtooth Lake? Yeah, so we hadn't previously considered some of the stuff that came up from the Ethereum talk today, but we had independently been looking at some projects that could be done there. So why don't I circle back with my team, and maybe I can sync up with you tomorrow on that?
Tomorrow I'm traveling, but I can probably Slack or something in the morning, in the late morning. Okay, well, one way or another we can connect on that, and I can otherwise circulate stuff generally on some of the Slack channels. Okay, awesome. And anyone who has an idea that they might want to try out, if they're going to be there and they want to get a few people to work with them on something, I think it would be interesting to see some of these happen organically as well. I think, actually, one of my colleagues had been exploring, the last time, standing up the fabric infrastructure using Ansible, and I think she said that she was continuing some of that work, so that's a possibility as well. So I think we have a few ideas on tap, and again, from my perspective, I think the most important thing coming out of the hackathon is going to be even more collaboration. I've been watching the pull requests and the issues and the conversations and so forth very closely, and I think there's definitely a steady trend towards increased collaboration. There are pull requests coming in from a number of people that are not IBM, and that's a good thing, because, again, that's the goal here: to create a true community initiative out of this. And hopefully, once we get Sawtooth Lake up and running, we can start seeing collaboration there as well, and potentially, and hopefully, collaboration between and across Sawtooth Lake and fabric as we go forward. So, any... Was somebody speaking or trying to speak? I was just affirming what you were saying at the end. Oh, okay, thanks. Great, thanks, Dan. Alright, so we have about five minutes left. We could... Patrick's not here, I don't know... Chris and Dave have already basically given their updates. We do have a few minutes left. Ram, do you want to give a brief update on architecture? Yes, so we met on Wednesday, and the discussion was around the consensus layer, if you will, and the requirements for how to isolate that and make it truly modular and independent of the business logic layer. We also discussed briefly the other issues that we want to tackle. So the plan for the next meeting is to go over both the API for that function as well as the functional definition of what happens in the consensus; that's the particular topic that we wanted to address. The overall plan is to have these kinds of one-off topics to explore, and as soon as the use cases and requirements are mature, then we will go down the path of formal architecture and functional decomposition.
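[To make the modularity goal concrete: a minimal sketch of what a pluggable consensus boundary could look like, assuming the design under discussion treats transactions as opaque to the consensus layer. The interface and names here are hypothetical illustrations, not the working group's actual API.]

```python
# Hypothetical pluggable consensus boundary: the consensus layer orders
# opaque transactions and never inspects the business logic inside them.
from abc import ABC, abstractmethod
from typing import Callable, List, Optional

class ConsensusPlugin(ABC):
    @abstractmethod
    def submit(self, transaction: bytes) -> None:
        """Hand an opaque transaction to the consensus layer."""

    @abstractmethod
    def on_commit(self, callback: Callable[[List[bytes]], None]) -> None:
        """Register the business-logic callback, invoked once the network
        has agreed on an ordered batch of transactions."""

class SoloConsensus(ConsensusPlugin):
    """Trivial single-node implementation, useful only for development."""
    def __init__(self) -> None:
        self._callback: Optional[Callable[[List[bytes]], None]] = None

    def submit(self, transaction: bytes) -> None:
        # With no peers to agree with, every transaction commits at once.
        if self._callback is not None:
            self._callback([transaction])

    def on_commit(self, callback: Callable[[List[bytes]], None]) -> None:
        self._callback = callback
```

Swapping SoloConsensus for, say, a BFT-based plugin behind the same interface would leave the business logic untouched, which is the kind of isolation described in the update above.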
Okay, thanks, Ram. And then, finally, Christopher, do you have an update from identity? No, we didn't meet; we're only meeting every other week, and the main thing at the last meeting was that we planned to meet face to face rather than doing a call. Alright, fair enough. What are the times and dates of that face-to-face meeting, for my identity folk? I mean, it's at the Hyperledger face-to-face, and we requested Thursday. At what time? 10:30 to 12:30. 10:30 to 12:30, and the location? I'm just relaying this information for one of my... This would be at the hackathon, at the face-to-face at DTCC. And Andrew, actually, there's a calendar: in terms of the bi-weekly or weekly meetings for some of the various workgroups, there's a calendar off the wiki, with all the meetings posted. Thank you. And I can give a brief update on the CI. So we had a call with the Linux Foundation's release engineering team, and we went through it, and I tried to capture most of the conversation highlights in Slack; that's in the CI pipeline channel if you're interested. The basics were sort of as follows. Again, I think that the quality and the security of what we're developing here is of paramount importance for the project, and it was pretty clear, talking with Andrew and the team, that we were probably going to want to be using Gerrit to manage the review process itself. It gives us a lot more rigor around establishing certain criteria before you actually merge a piece of code: it can require multiple reviewers to have reviewed it and given their +2, if they're committers; we can also integrate requirements that the build and the tests pass; and we can articulate different levels of testing, so we can have smoke tests and we can have the full integration tests. The other value here is that it can basically also enforce all of the legal aspects, in terms of making sure that everybody has signed off on the DCO. I've got a bot right now for the fabric that does an okay job, but it's actually not checking, on the merge, whether or not the DCO has been signed off; it's just checking it on the actual pull request event.
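[Since the gap just described, checking the DCO only on the pull request event rather than at merge time, is easy to miss: a minimal sketch of what a merge-time check could look like, assuming a local clone and plain git. The script name and usage are hypothetical; the actual fabric bot is not shown here.]

```python
# Hypothetical merge-time DCO gate: fail if any commit between the merge
# base and the candidate head lacks a Signed-off-by line in its message.
import subprocess
import sys

def commits_missing_signoff(base: str, head: str) -> list:
    """Return hashes of commits in base..head without a DCO sign-off."""
    revs = subprocess.check_output(
        ["git", "rev-list", f"{base}..{head}"], text=True
    ).split()
    missing = []
    for rev in revs:
        # %B prints the full commit message for the given revision.
        message = subprocess.check_output(
            ["git", "log", "-1", "--format=%B", rev], text=True
        )
        if "Signed-off-by:" not in message:
            missing.append(rev)
    return missing

if __name__ == "__main__":
    # Usage (hypothetical): python dco_check.py origin/master HEAD
    bad = commits_missing_signoff(sys.argv[1], sys.argv[2])
    if bad:
        print("DCO sign-off missing on:", ", ".join(bad))
        sys.exit(1)  # a Gerrit/Jenkins gate would block the merge here
    print("All commits signed off.")
```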
So there are a number of things, and so I think Gerrit is definitely something that we need to be working towards. Now, the trick, of course, becomes that if you adopt Gerrit, then that means we are essentially turning off all commits to GitHub except via Gerrit itself, because the source of truth becomes Gerrit, and GitHub is just a mirror of what Gerrit is maintaining. But this means that it sort of cascades: that means, well, okay, then you can't do issue tracking using GitHub, and that means we'll have to adopt one of the tools that the Linux Foundation can run, which is either Jira or Bugzilla. I think on the call most of the people that were there seemed to express a preference for Jira, but they're roughly equivalent, and the Linux Foundation team has great integration between Jira, Gerrit, and Slack and so forth already sort of in the can. And then we explored, okay, so then what would it take to migrate all the issues out of GitHub and into Jira, and apparently there is full automation for that; it should be fairly straightforward and not very complex, and we could probably do it very quickly. The Linux Foundation team, Andrew and team, also basically made a strong recommendation for use of Jenkins. I know that the fabric team has been using Travis, but again, that's a fairly new development. The problem with Travis, of course, is that it's free, and so therefore we don't really get support. We could pay for it, but since we're already going to be paying the Linux Foundation for the tooling, they can support Jenkins 24/7; they operate it, they can scale it, and they've got the experience from running a number of different projects. So they did a really good sell job that we probably should be transitioning to Jenkins. And of course, the other aspect that was important is that Sawtooth Lake is already using Jenkins, and so one of the two incubating projects was going to need to make a change, and since they did such a great job of selling Jenkins, it looks like it'll be that we'll have to migrate the Travis scripting over to Jenkins, which really shouldn't be too hard. And again, we really do need to make sure that we're driving towards a consistent set of tooling, so I think we're working on the Gerrit, Jenkins, and Jira path. And actually, I don't remember, Dan, if you had a concern; I don't think you had a concern about using Jira for Sawtooth Lake. And then, finally, they also made a pitch that, because we're going to be using multiple repositories, with the fabric API and the fabric, and with multiple repositories for Sawtooth Lake, it probably would be in our best interest to actually start a full release engineering function, not just continuous integration, and actually have a full-time Linux Foundation staffer manning that formally, and I think that was also a great idea. And then, finally, there was the wiki. I think we have to do a little bit of exploration as to whether or not we can continue to use the GitHub wiki once everything gets locked down and run through Gerrit. If we can't, then we'll have to explore whether to use MediaWiki or one of the other popular wiki tools, but again, the Linux Foundation team has experience running those as well, so I don't think that'll be a problem. It'll be a little bit disruptive as we move things over, but I think for the most part this is likely to be the plan going forward, and I'm hoping that maybe we can make a little bit of progress with this at the hackathon. So that's my update. Any questions? Comments? Concerns? If not, I think we're at the end. I look forward to seeing many of you at the hackathon, and I think we're adjourned. Thanks, guys. Thank you. Bye.