Thanks everyone for coming. This has been a big effort, putting on this conference, and it's really fulfilling for me to see so many people show up. I'm glad everyone's here and enjoying it. Thanks.

Okay, so basically what I wanted to do in this talk is give an overview of this issue. I'm going to try to provide a sort of map or framework. I'm not really trying to advocate for one specific thing; I just find this concept interesting, and it gets into lots of little nitty-gritty connected details.

The background context is that back in November, when Bitcoin ABC and the various teams released their plans for the roadmap, there were things in there that seem obvious to everyone, like raising the block size limit again and reactivating opcodes. But then there was this other thing written in there: let's move towards canonical transaction order, maybe removing the current ordering rule as a first step. I think a lot of people weren't really aware of this, and it seemed like it came out of left field: what is this topic? Why are they talking about this?
So that's basically what I'm going to try to go over today: what this is all about. It's an interesting topic because it gets into all kinds of details of how things work in theory, and also how they're implemented.

This diagram is my mental concept; I hope you can understand it. The little squares are supposed to be transactions. At the top is what I guess we'd imagine the blockchain looks like: a bunch of transactions put into merkle trees, the merkle roots in the block headers, and the block headers all pointing back to each other, forming a chain. That puts a timestamp on the transactions.

But there's also another way the transactions are connected together: each transaction has pointers to the outputs of other transactions, so that when you spend money, it points to where that money came from. This forms what I call in this diagram the causal DAG, a directed acyclic graph. It all goes in one direction, and it proves that when you spend money, it's based on where the money was before. It all has to make sense in time: you can't spend money that wasn't there previously. These are all connected through hashes of the inputs, the outputs, the transactions, and so on.

The way transactions are validated currently is what that big black arrow represents. You have a UTXO set, so you know all your unspent outputs, and you scan through each transaction one after the other: okay, this one spends these outputs, those exist, so this is valid, and we move on to the next one, and so on and so forth.

The effect of that is, if you look at the lower level where the transactions are connected in a causal way, you can't rearrange them. If you change the order of two transactions where one spends the other, then when you scan through linearly and get to the first one, you'll say: this output doesn't exist, therefore this is invalid, and the whole thing fails. What that means is that there's a partial ordering enforced in the merkle tree, the top part. You can change the order of transactions that aren't dependent on each other in a causal way, but the ones that are dependent have to be in order. It's basically a partial ordering requirement.

And it's basically something that's due to how it's implemented. There's no real reason why the transactions should be in that order, other than that if you don't do it, validation will fail. This becomes more important when you think about how to parallelize validation. Right now, as I said, you scan through linearly, one after the other. So what do you do if you want to parallelize validation?
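A minimal sketch of that linear scan, with transactions and the UTXO set reduced to plain Python dicts and sets (all names here are illustrative, not actual node code):

```python
# Sketch of the sequential validation described above: spend each
# transaction's inputs, then add its outputs, in block order.

def validate_block_sequential(transactions, utxo):
    """Scan transactions in block order against a toy UTXO set."""
    for tx in transactions:
        # Every input must reference an output that already exists.
        for outpoint in tx["inputs"]:
            if outpoint not in utxo:
                return False          # spends a missing (or later) output
            utxo.remove(outpoint)     # spend it
        # Only now do this transaction's outputs become spendable.
        for i in range(tx["n_outputs"]):
            utxo.add((tx["txid"], i))
    return True

# A child placed before its parent fails, which is exactly the
# implicit topological-order requirement the talk describes:
utxo = {("coinbase", 0)}
parent = {"txid": "a", "inputs": [("coinbase", 0)], "n_outputs": 1}
child = {"txid": "b", "inputs": [("a", 0)], "n_outputs": 1}
assert validate_block_sequential([parent, child], set(utxo)) is True
assert validate_block_sequential([child, parent], set(utxo)) is False
```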
It's sort of a little weird; it seems backwards, but the way you can do it is with batches. I just show two batches here, but in theory you could have many. You take batches of transactions, and the first thing you do is just add the outputs in; you don't check the inputs. You can do that in parallel: each of your parallel processes can just add outputs to the UTXO set. Then you do a second parallel pass, where you go back to the inputs and check them all against the UTXO set. It's basically parallelizable.

And then, right now, you'd have to do a step three if you wanted to do this, because those two steps don't check the order I talked about. You could have transactions in the merkle tree where the one that spends the other comes before it, to the left in the merkle tree. The only reason you would do step three is so that the old way of validating would also still work. But if you're going to validate this way, it makes no sense; it's a completely pointless step. And it's linear: you can't parallelize it, because it's this causal chain, a time-dependent thing, so you have to check it through time.
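The two-pass scheme can be sketched the same way (again a toy model; a real implementation would split each pass across worker batches):

```python
# Pass 1 inserts all of the block's outputs; pass 2 checks every input
# against the combined set. Order within the block no longer matters,
# and each pass is parallelizable per batch of transactions.

def validate_block_two_pass(transactions, utxo):
    # Pass 1: add every output created in this block.
    for tx in transactions:
        for i in range(tx["n_outputs"]):
            utxo.add((tx["txid"], i))
    # Pass 2: check that every input exists and nothing is spent twice.
    spent = set()
    for tx in transactions:
        for outpoint in tx["inputs"]:
            if outpoint not in utxo or outpoint in spent:
                return False
            spent.add(outpoint)
    utxo -= spent
    return True

parent = {"txid": "a", "inputs": [("coinbase", 0)], "n_outputs": 1}
child = {"txid": "b", "inputs": [("a", 0)], "n_outputs": 1}
# Either ordering of parent and child now validates:
assert validate_block_two_pass([child, parent], {("coinbase", 0)}) is True
# Double-spends are still caught in pass 2:
assert validate_block_two_pass([parent, parent], {("coinbase", 0)}) is False
```

Note that, as mentioned next, this version never checks the causal order at all; that is the extra assumption about SHA-256.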
Oh yeah, one note: when you do it this way, doing the outputs first and then connecting the inputs after, you're not explicitly checking that each input comes from a previous transaction. But the only way it couldn't is if you break SHA-256. So you're making a sort of additional assumption, but I think we can assume SHA-256 is valid, or else we have bigger problems.

Okay, so up till now I've said: let's get rid of this order requirement, put the transactions in any order you want, and then you can parallelize the thing. So what does canonical order mean? Canonical means there's one specific order it has to be in, not this partial order. That sounds kind of contradictory: I've just told you, hey, let's get rid of the order so you can parallelize it, so what's this canonical order we're talking about?

Basically, the alternative is what I call the causal order, which depends on the relationships between the transactions: if you look at two transactions, you don't really know what order they should go in unless you know whether one spends an output of the other. That's very difficult to check in parallel. Whereas if you have a sorted order, say the transactions are sorted by the hash of the transaction, you can look at just two transactions and know that they're in the correct order with respect to each other. Or you can take two sorted lists and check that this list and that list are in the correct order just by comparing the ends, if you know each is sorted internally. It's very easy, and it's easy to combine them as well. I think that was all I had to say there. So why would you want to do this?
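The local-checkability point can be made concrete: with a lexicographic order, whether two adjacent entries are in the right order is a purely local question, and two sorted lists combine in linear time (a toy sketch with txids as hex strings):

```python
# With a canonical (sorted) order, order-correctness is checkable pairwise,
# and independently sorted batches merge cheaply.

def is_sorted(txids):
    """Every adjacent pair can be checked in isolation."""
    return all(a < b for a, b in zip(txids, txids[1:]))

def merge_sorted(left, right):
    """Combine two sorted lists in O(n), as in a standard merge step."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

assert is_sorted(["0a", "3f", "9c"])
assert not is_sorted(["3f", "0a"])
assert merge_sorted(["0a", "9c"], ["3f"]) == ["0a", "3f", "9c"]
```

Checking the causal order, by contrast, requires knowing the spend relationships between transactions, which is exactly what makes it hard to verify locally.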
There are several advantages, which is interesting. The main one a lot of people talk about is block propagation. If you do this, then as long as you have the set of transactions, you don't need to know what order they're in. That's why I wrote it in this box; it's a slightly weird way to think about it if you want to, but with canonical order, for anything where you're not building the merkle tree, you don't have to worry about the order at all. It's just a set of transactions, and the only time you need to think about the order is when you're doing the part that depends on it; for everything else it doesn't matter. So when you're transmitting a block across the network, all you have to do is provide the information of which transactions are in it, and the merkle tree will get built the same way by everyone and have the same root.

Another interesting thing, since I mentioned the sorted aspect: this allows proofs of absence.
For example, if you know the transaction ID of a transaction, you can prove that the transaction is not in a block; you can provide a compact proof of that. You look at two adjacent transactions in the merkle tree, and if one is lower and one is higher than the value you're concerned about, you know that value is not in the block, because between those two leaves is the only place it could be, since everything has to be in sorted order. This is useful for things like sharding, or fraud proofs, where you might want compact proofs of the absence of transactions.

Anyway, I guess this is what I'm getting at: for most things, it just means you don't have to worry about ordering. For block propagation and things like that, it removes all of that. The only time you have to worry about it is when you're validating the way the merkle root in the header was built.

Okay, now I'm just going to touch on a few minor details. One thing is: what should the order be? The merkle tree is built out of transaction hashes: you hash all the information and combine it into a merkle tree. One interesting question is, should that be the order? Should you just order the transactions by the transaction hash?
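Stepping back to the absence proofs for a moment, the bracketing argument can be sketched as follows (a toy model over a sorted list of txids; a real proof would also carry the merkle branches showing the two leaves are adjacent in the block's tree):

```python
# Toy absence proof: exhibit the two adjacent sorted leaves that bracket
# the missing txid. Merkle authentication paths are omitted here.

import bisect

def absence_witness(sorted_txids, target):
    """Return the adjacent pair bracketing `target`, or None if present."""
    i = bisect.bisect_left(sorted_txids, target)
    if i < len(sorted_txids) and sorted_txids[i] == target:
        return None  # transaction is present: no absence proof exists
    left = sorted_txids[i - 1] if i > 0 else None
    right = sorted_txids[i] if i < len(sorted_txids) else None
    return (left, right)

def check_absence(witness, target):
    """A verifier only needs the two bracketing leaves (plus, in a real
    system, proof that they really are adjacent leaves of the tree)."""
    left, right = witness
    return (left is None or left < target) and (right is None or target < right)

txids = sorted(["1a", "4b", "9d"])
w = absence_witness(txids, "7c")
assert w == ("4b", "9d") and check_absence(w, "7c")
assert absence_witness(txids, "4b") is None  # present, so no witness
```

The key point is that the witness is constant-size regardless of how many transactions the block contains, which only works because the leaves are sorted.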
You could order by the transaction hash; it's just an arbitrary number, basically. But if you look at things like the MalFix proposal Thomas just presented, that takes apart these two concepts of the transaction hash and the transaction ID. Maybe I should just flip back to that other slide. If you look at this one, the bottom part, the causal DAG, would be connected by the ID, because that's where you need a unique identifier for each transaction. But for the blockchain, you want the transaction hash of all the data in the transaction to be what's used to build the tree, because you're timestamping the information: you need proof that that information existed at that point in time. So you can separate these two concepts of transaction ID and transaction hash, and maybe it makes more sense to order by transaction ID, because if you're going to do absence proofs, what you want to prove absent is the inputs, and those are referenced by ID. Anyway, that's just one of the details.

I think I'm almost done, so I guess my conclusion is that it's unclear; right now everything works fine, and this is more of a laying-the-groundwork kind of thing. There are various options. One option is to maintain the status quo: we have this partial ordering, which is causal; it's not the most elegant thing, but it works, and we could probably still scale many orders of magnitude with the way things are now. You can remove the ordering, which allows you to parallelize. Or you can move to this canonical ordering. The downside, I guess, is that if you remove the ordering, the simple, naive implementation doesn't really work anymore.
You have to use some kind of parallelizable method instead. The other interesting thing is that it's one of those ideas that touches on seemingly unrelated areas, like block propagation and these fraud-proof ideas. It seems like a fundamental building-block type of thing, which is kind of a sign that it's an important thing to think about, and that it lays the groundwork for a lot of things we might want to do in the future. Anyway, I think that's all I have to say for now. Does anyone have any questions?

Q: Thank you, Anthony, that was awesome. Questions from the audience for Anthony? I have a quick question: what inspired you to look at block ordering, and then canonical ordering, as the inspiration for this presentation? What happened?

A: I don't know, that's a good question. I think I just found it interesting as a concept; I just like to understand how things work. It's more out of my own interest in understanding how it all works, and when I heard talk of it, I was like, what are they talking about, basically.

Q: Thanks for the beautiful presentation. You mentioned that this is mainly laying the groundwork for now. But if I could ask you to look through your own crystal ball, what would you think the commercial applicability of this would be? There has to be a specific reason why this is of interest to you.

A: Well, the applicability is that it allows potentially massive scaling in the future. We're talking maybe a hundred times, a thousand times, or more than current transaction rates.
That's basically what this would enable. You could have things like parallelization, maybe sharding, where instead of having one node that validates everything, you have nodes that validate subsections and can prove, or at least prove if there's a problem in, the subsection they're validating. It's basically what could allow Bitcoin Cash to scale to, you know, a hundred times Visa levels, something like that. Maybe; I don't know, there are different opinions on it. I'm just trying to explain the concepts, but I think there are a lot of interesting ideas in there.

Q: Hi. On that point, actually, I think another way to think about it is in terms of the computational complexity of the scaling, in terms of something like big-O notation, for example. This feels like it could actually change the big-O complexity of verification. Do you feel the same way, or have you looked at that at all?

A: Well, it lowers it. Basically, right now it's like O(n) as you grow, and this would lower most things down to something like O(log n).

Q: Exactly, that's what's exciting about it.

A: Yeah, so basically, I guess another way of answering the previous question is that once you get to a certain level, it allows you to keep growing without massively running into resource limits.

Q: Any other questions? Miners change the 32-bit nonce to vary the resulting hash, and they can run through those 32 bits incredibly quickly; then to vary it further, they change the order of the transactions. Will this change that? How will miners be able to do that?
A: Well, you would just need to grind something else other than changing the order. There are lots of ways you can change something in one of the transactions in the merkle tree. I can think of random things off the top of my head, but yeah, they would just have to grind something else. But that would affect that, that's for sure.

Q: Okay, thanks. Anyone for the last question of the day? Who wants the last question of the day? Come on now. I can ask it: do you anticipate this being something that's relatively easy to switch to on Bitcoin Cash, or should it be deployed on something else to try it out?

A: I don't think it's conceptually that hard to do; it's just a matter of making sure it plays well with all the pieces that connect to it. For example, this doesn't affect SPV clients at all; everything is the same. The transactions are in a different order, but when you get your merkle paths to them and all that, it would all be the same. Obviously, it's a fundamental change to how things are working, so you'd have to be very careful and make sure it's well tested and implemented, but conceptually it's nothing too crazy. So I don't know if that answers the question.

Q: And with that, let's give a round of applause to Anthony Zegers. Thank you very much.