My name is Zac Mitton. I work at ConsenSys and I've been doing research on Merkle computers. You've probably heard of TrueBit. TrueBit is an example of a Merkle computer; it's the main project working on this. So I've been collaborating with that team and researching this stuff at ConsenSys. I see that there are a few main strategies for scaling Ethereum, and today I want to talk about an idea I had that's similar to Plasma. It utilizes a Merkle computer like TrueBit, but the details are a little bit different. I actually wrote about it before Plasma came out. The key reason blockchains aren't scalable is that all full nodes have to perform all computation. Therefore the defining feature of any scaling solution is to introduce computation that not every node has to perform. This is true of sharding, state channels, and Merkle computer networks; they all try to accomplish it in different ways.

So first I want to ask: why blockchain? In constructing a new system we want to filter out what the fundamentals are. It's important to define what blockchain is providing, because the goal is to preserve those particular qualities while scaling, or at least to approach those guarantees. A lot of what we associate with blockchain is possible with asymmetric cryptography alone. So what did the first use of blockchain solve? The double-spend problem. But with Ethereum we have more than just a currency, so why do we need blockchain for all this other stuff? I think the problem can be defined more generally as disagreement over the chronology of events separated by distance. The double spend is actually just a subset of that particular problem. I realized something kind of cool: this is a fundamental problem ingrained in the nature of physics. You can derive it from Einstein's relativity. Here on Earth the effects are overshadowed by network delay. Once we colonize Mars, maybe this will become a factor.
We should think of Ethereum as an extremely expensive computing machine with the superpower that it can come to agreement on the order of events. We want it to be expensive so that it stays small. Quick question: who in here runs a full node on Ethereum? And who actually runs an archive node? Two people. Technically an archive node on Ethereum is what a Bitcoin full node would be, because it's the only type of node that validates every single transaction from genesis. I don't run a full node; it got too big for my laptop. Scarcity is essential to any money application, including Ether, and I think you need to audit the entire blockchain in order to calculate the supply. If you're not doing that, that's okay, but you're trusting the two other people in here who are doing it. There's a certain healthy ratio of people who should probably be running full nodes. I don't know what that number is; it depends on how much risk you find agreeable. I think we need to keep the blockchain small so more people run it, but that creates a problem: dApps are too expensive on such a limited machine, and this is why we need a Layer 2 Merkle computer network. We don't need the same redundancy in Layer 2 because we don't intend to have the entire global economy resting on it. A huge advantage of Ethereum over Bitcoin is that you can define a really rich interaction between one small smart contract and a potentially huge side network; we have a Turing-complete language to define how those layers lock together. A very simple Layer 2 example is a state channel. Alice and Bob make payment promises back and forth (Victor just called them checks), and all the chain sees is 10 ETH locked in a smart contract. We don't need to know what's going on in the second layer.
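To make the state-channel idea concrete, here's a toy Python sketch. It's illustrative only: the HMAC "signatures" stand in for real ECDSA signatures, and all the names and amounts are made up for the example.

```python
import hashlib
import hmac
import json

def sign(key: bytes, state: dict) -> str:
    """Toy stand-in for an ECDSA signature over the channel state."""
    msg = json.dumps(state, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, state: dict, sig: str) -> bool:
    return hmac.compare_digest(sign(key, state), sig)

# Alice and Bob each lock 5 ETH; the chain only sees 10 ETH in a contract.
alice_key, bob_key = b"alice-secret", b"bob-secret"
state = {"nonce": 0, "alice": 5, "bob": 5}

# Off-chain payment promise: Alice sends Bob 2 ETH. No transaction hits
# the chain; the parties just exchange a newer doubly-signed state.
state = {"nonce": 1, "alice": 3, "bob": 7}
sigs = (sign(alice_key, state), sign(bob_key, state))

# At settlement, the contract only needs the latest doubly-signed state.
assert verify(alice_key, state, sigs[0]) and verify(bob_key, state, sigs[1])
```

The nonce is what lets the contract pick the *latest* state if someone tries to settle with a stale one.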
A Merkle computer network like TrueBit can enable computation to be done off chain and verified for correctness on chain, even if the computation is way too large to perform on chain. So let's take a look at how that works. First, a task gets posted to a contract. Then a subset of the nodes compute the solution to the task using a special Merkle computer. Someone is chosen and posts their solution to the blockchain. It stays pending for a time period during which anyone who disagrees can post their answer as well. So now we have two different nodes posting two different answers, and we need to find out which one is correct and which one is lying. The naive way to find out would be to recompute the whole task on chain, and Ethereum would give us a definite answer. But again, we're dealing with large computational tasks, so Ethereum won't even let us do this: it's beyond the gas limit. That's okay, though, because we don't have to run the entire computation on chain. Because we use this special Merkle computer, at every point in the computation we have a Merkle hash of everything in storage, in memory, on the stack, plus the current machine state. So we can quickly pinpoint the exact operation where the two parties disagreed, and that single operation can then be computed on chain to determine who is being untruthful. Since we use a binary search to pinpoint it, this scales logarithmically, and that's the really cool part about TrueBit. For example, if a computation requires a billion operations, we only have to do about thirty of them to find the disputed step and settle it on chain. The applications for this are immense: nearly any dApp that requires scaling, or any computational task that's too large to do on chain. But for the rest of this talk I want to highlight a specific use case of this system. Instead of just using a Merkle computer for one-off tasks, we can use it to set up an ongoing relationship with a whole other blockchain. Why would we do this?
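The bisection part of the dispute game can be sketched in a few lines of Python. This is a simplification of the actual TrueBit verification game: the two lambdas stand in for each party's claimed Merkle root of the machine state after each step, and the fault step is invented for the example.

```python
import math

def bisect_dispute(solver_state, challenger_state, n_steps):
    """Find the first step where two execution traces diverge.

    solver_state(i) / challenger_state(i) return each party's claimed
    Merkle root of the machine state after step i. The parties agree at
    step 0 and disagree at step n_steps (they posted different answers).
    """
    lo, hi = 0, n_steps
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1
        if solver_state(mid) == challenger_state(mid):
            lo = mid  # still agree here; fault is later
        else:
            hi = mid  # already disagree; fault is earlier
    # Step `hi` is the single operation re-executed on chain.
    return hi, queries

# Toy traces: the faulty solver diverges from one step onward.
fault = 123_456_789
honest = lambda i: ("root", i)
faulty = lambda i: ("root", i) if i < fault else ("bad-root", i)

step, queries = bisect_dispute(faulty, honest, 10**9)
assert step == fault
assert queries <= math.ceil(math.log2(10**9))  # at most 30 queries
```

A billion-step computation narrows to one step in at most ceil(log2(10^9)) = 30 rounds, which is exactly the logarithmic scaling described above.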
I think this can solve everything we've been trying to achieve with permissioned blockchains, sometimes referred to as consortium chains. This is a blockchain that can be publicly read, but only certain parties can write blocks to it. Businesses experiment with these because they have some advantages: they can perform transactions faster, they can customize some of the rules, and most importantly, they can expand the data throughput. However, there are some huge flaws. They don't solve the chronology problem, because the chosen parties making the signatures can always go back and re-sign blocks in all kinds of different ways, and this costs them nothing. Okay, so we can solve this with a Merkle computer. Imagine we do two things. First, we code all of the rules of this consortium chain into a smart contract on main net. Then we periodically stamp the state into that contract. I borrowed this slide from Christian Reitwiessner, from a Plasma talk that he did; it's a very similar idea. As you can see, main net is running along, and as things come in we're stamping the state root back to main net. Basically, that locks in a certain type of ordering. It's not perfect; it doesn't lock in a total ordering, but it at least gives us something this child chain can't go back on. So the authorities of the child chain are including transactions and periodically stamping them into the main chain. Once confirmed on main net, they can't reorder, and all the users just inherited that assurance. But what about validation? You can't arbitrarily reorder, but the main chain isn't validating transitions, so the authority can still arbitrarily change balances. But now you have this Merkle computer network playing watchdog on the child chain. They see everything it does.
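The stamping mechanism above can be sketched as a toy main-net contract in Python. Assumptions are mine: a single named authority, a monotonically increasing block number as the anti-rewind rule, and a minimal Merkle root over the child chain's transactions.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Merkle root of a list of transaction byte-strings."""
    layer = [h(leaf) for leaf in leaves] or [h(b"")]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the odd leaf out
        layer = [h(a + b) for a, b in zip(layer[::2], layer[1::2])]
    return layer[0]

class CheckpointContract:
    """Toy main-net contract: the child-chain authority periodically
    stamps its state root here. Once stamped, history can't be reordered,
    because any rewrite would need a checkpoint the contract rejects."""

    def __init__(self, authority):
        self.authority = authority
        self.checkpoints = []  # list of (child_block_number, state_root)

    def stamp(self, sender, block_number, root):
        assert sender == self.authority, "only the authority may stamp"
        if self.checkpoints:
            assert block_number > self.checkpoints[-1][0], "no rewinding"
        self.checkpoints.append((block_number, root))

contract = CheckpointContract("authority")
contract.stamp("authority", 100, merkle_root([b"tx1", b"tx2"]))
contract.stamp("authority", 200, merkle_root([b"tx3"]))
assert len(contract.checkpoints) == 2
```

Note what this does and doesn't give you: the checkpoints lock ordering, but the contract isn't validating the transitions behind each root; that's the watchdogs' job.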
And as soon as they notice any fraud taking place, they can challenge it back on main net, and because of the way this works, the correct party will inevitably win any challenge. We can also do light clients with this. Let's see if it lets me go back. Basically, if this state hash up here were invalid, someone would challenge it on main net, the computation would go through, and main net would invalidate that block; not just the one invalid thing, but the whole block, and now we're back to here. So now we can actually do light clients: all a light client has to do is follow this one contract on main net. It doesn't have to follow any of the intermediary stuff. Now, it's important to note that none of this fraud proving should ever actually happen, because it only happens in the catastrophic case where the chain authority is attempting to cheat its users. It's just the threat of it being there that really makes this possible. And I think if this does happen and fraud is detected, at that point all the users of this child chain can submit proofs of their data and basically just exit back to the main chain. Assuming we can build a robust Merkle computer network, we approach the same security guarantees as main net. It does take longer to reach statistical finality, but on the plus side, I see no reason why this child chain wouldn't be able to support thousands of transactions per second. Now, everything I've said so far is pretty congruent with the way Plasma works, but I want to take it a little bit further. The question is: why should this child computer even be a blockchain itself? And I contend that it shouldn't, because we've already inherited the beneficial properties of the blockchain by tying into main net. The child chain can just be a traditional server. Imagine requesting a transaction and getting an instant update with a signature and proof from that server.
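Here's a minimal sketch of what "just follow the one contract" looks like for a light client: it holds only a stamped state root and checks a Merkle inclusion proof against it. The proof encoding (a list of sibling-hash/direction pairs) is an assumption for illustration, not TrueBit's or Plasma's actual format.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_proof_valid(leaf, proof, root):
    """Verify a Merkle inclusion proof.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs walking
    from the leaf up to the root."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Tiny 2-leaf tree: root = h(h(tx1) + h(tx2)).
tx1, tx2 = b"tx1", b"tx2"
root = h(h(tx1) + h(tx2))

# A light client holds only `root` (read from the main-net contract)
# and checks tx2's inclusion with a one-hash proof.
assert merkle_proof_valid(tx2, [(h(tx1), True)], root)
assert not merkle_proof_valid(b"forged", [(h(tx1), True)], root)
```

The same kind of proof is what a user would submit to exit back to main chain if the authority ever turned malicious.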
We don't have to reason about mempools or nodes with different orderings. It's a centralized server: it computes the answer immediately and tells you that you were transaction number 697. It's a signed message, and you can prove it. So now that everything is provable, it doesn't actually matter that we give one authority the ability to write these blocks, because anything they do that's wrong, we can immediately prove. And if they behave like that, we can just exit the chain and never use it again. So to summarize, what we have made is essentially the most transparent centralized server ever created: every step of its computation is being openly watched, and challenged if it proves fraudulent. Now, not everything is perfect yet; we still have a couple of big problems we're working out. Data availability is a really tough one. The Swarm guys just went on and talked a little bit about that. We've gotten to the point where I don't think we can just naively solve this; it seems like we're facing the general problem of data availability, much like a lot of these other projects. However, TrueBit is building a Dogecoin-Ethereum bridge as a mini proof of concept for this, and what's nice about that particular application, bridging two blockchains together, is that you don't have to worry about data availability, because blockchains are available: they make themselves available through their incentive structure. I'm also currently researching the proper mechanism design to encourage this robust watchdog network to exist, because we can all run nodes, but at some point we want to actually get paid to do it, and we want to make sure we have a healthy network. That's about it. Thanks, guys. Thank you.