Hello, my name is Colin Cantrell, and I'm here to give a presentation on scaling blockchains. This has been a topic of great discussion since about 2015, when the blockchain scaling conferences really began, and it's proven to be a very difficult feat to achieve; many people have been looking at varying types of solutions. With this presentation, essentially I'm planning to make it possible for you to understand some of the challenges we have in the blockchain industry, and also some things we can do to improve overall scalability.

So why don't blockchains scale? Because they rely on technologies that were not engineered specifically for blockchains, such as Google's LevelDB, which we'll get into in more detail on one of the next slides. Essentially, LevelDB was designed as a storage engine that could handle petabytes of data, really large data sets, and it does this through a concept called log-structured merge trees, which is essentially appending on top of sorted string tables: you have a bunch of tables, you sort them, and you find the right table through the log-structured tree, and so on. One of the issues with it is not that it's a bad technology, but that it was not engineered specifically for blockchains. There really hasn't been any technology engineered specifically for blockchains. If any of you are familiar with the Bitcoin code base, it uses Berkeley DB, generally version 4.8, for your wallet file, which uses a binary search tree with logarithmic complexity. And it also uses Google's LevelDB for the main blockchain store, to store some of your indexing files.

So we started to see a lot of these limitations. Bitcoin upgrading from Berkeley DB to Google's LevelDB for the block indexes actually brought significant improvements, because it's not just about the size of the blockchain or the data; it's how you sort and structure that data, which determines your complexity when you're doing lookups. And blockchains are very read-intensive, especially when we're dealing with virtual machines, because some of the state those virtual machines need has to be retrieved from the disk, sometimes from a remote node. Every stage you add increases the overall time, which decreases your overall throughput and capacity for the single-dimensional structure.

So, as I was saying, object lookup takes, at best, O(log n), which gets slower as more transactions are stored in the chain. Logarithmic time complexity is generally what you want in an algorithm, unlike, say, quadratic complexity: if your algorithm is n squared and your data set has 10 items, you end up doing 100 iterations, where logarithmic complexity would be substantially lower. So what ends up happening: say you have, on average, a million keys in the data store; log base two of that is about 20. And each one of those n lookups generally translates into a disk seek.
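To make the seek arithmetic concrete, here's a minimal C++ sketch, just an illustration of the point above rather than anything from the talk's codebase, printing the approximate per-lookup seek count for a balanced search tree at a few key counts:

    #include <cmath>
    #include <cstdio>

    int main() {
        // In a balanced search tree, a lookup touches about log2(n) nodes,
        // and in the worst case each touch is one disk seek.
        const double sizes[] = {1e6, 1e7, 1e8, 1e9};
        for (double n : sizes)
            std::printf("%12.0f keys -> ~%2.0f seeks per lookup\n",
                        n, std::ceil(std::log2(n)));
        return 0;
    }

At a million keys that's about 20 seeks per lookup, and it only climbs from there as the chain grows.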
And that is the most expensive operation, where you essentially have to move to a certain physical position on disk to read the data straight off of it. A seek is your most expensive operation, so a database storage engine wants to minimize seeks as much as possible. The operating system will also optimize this through what's called paged virtual memory, where it stores your files in spare unused RAM so you can access them much quicker and not actually have to hit the disk. Hitting the disk can be very, very slow.

Blockchains also use complex cryptographic algorithms that require a lot of computing cycles to process, such as hashing and digital signature algorithms. Generally speaking, blockchains use what are called elliptic curve digital signature algorithms, and based on my personal tests (this is just my computer; benchmarks differ across machines), running brainpoolP512t1 multithreaded, I was able to get maybe seven or eight thousand signature verifications per second. (I'll show a rough benchmark sketch at the end of this passage.) So you have a computational bottleneck there, along with these disk bottlenecks, and your data storage engine, the database, has a big influence over your runtime. Because if it takes 20 iterations to find one key, one state, then you're now 20 disk seeks deep for every operation; if you could make that one seek, it would be much, much more efficient.

So, blockchains are single-dimensional structures. They essentially chain in one direction: you have a block chained to the next block and the next block, and the concept, as I'm sure all of you are familiar, is that if I change any bit in one of those previous blocks, it's going to break that chain. It's a single-dimensional chain that lets you traverse forward and backward; one dimension you can consider like a line. It links blocks together in a finite space, traversable forward and backward.

Now, a single dimension can only hold so much information, which is why reality is not one-dimensional. Each added dimension increases capacity exponentially, and that's very simply proven mathematically. If we have a one-dimensional world with an integer size of 10, then you only have 10 possible positions you can be in. But if it's a two-dimensional world, now you have 100: 10 times 10. Another good example: 2πr is your circumference; going up to two dimensions, r squared gives you your area, πr²; and 4/3πr³ gives you your actual volume. Every time you add a dimension, you're essentially exponentiating. That's also why in calculus, when you do derivatives and integrals, the first derivative of a position equation is your velocity and the second derivative is your acceleration: you're reducing dimensions down from position to the movement within it, traversing dimensions, essentially, and I'm speaking purely in the mathematical sense. The reality we exist in is encoded with three spatial dimensions, and then we have one dimension for time, which essentially designates where everything is within the three-dimensional space.
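Returning to the signature-verification numbers above, here's a minimal single-threaded benchmark sketch against OpenSSL's ECDSA on brainpoolP512t1. It uses the OpenSSL 1.1-style EC_KEY API, which is deprecated in 3.x but still present; multithreading this is what would get toward the figures quoted. Compile with -lcrypto.

    #include <openssl/ec.h>
    #include <openssl/ecdsa.h>
    #include <openssl/obj_mac.h>
    #include <openssl/sha.h>
    #include <chrono>
    #include <cstdio>
    #include <cstring>

    int main() {
        // Generate a keypair on the curve mentioned in the talk.
        EC_KEY* key = EC_KEY_new_by_curve_name(NID_brainpoolP512t1);
        EC_KEY_generate_key(key);

        unsigned char digest[SHA256_DIGEST_LENGTH];
        memset(digest, 0xAB, sizeof(digest));  // stand-in message digest
        unsigned char sig[256];                // ample room for a 512-bit curve
        unsigned int siglen = 0;
        ECDSA_sign(0, digest, sizeof(digest), sig, &siglen, key);

        const int N = 2000;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i)
            ECDSA_verify(0, digest, sizeof(digest), sig, siglen, key);
        auto t1 = std::chrono::steady_clock::now();
        double secs = std::chrono::duration<double>(t1 - t0).count();
        std::printf("%.0f verifications/sec on one thread\n", N / secs);

        EC_KEY_free(key);
        return 0;
    }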
But because we live in three-dimensional space, there are exponentially more capabilities to express and experience, because there are exponentially more possible places for information to be encoded.

So, blockchains are sequential, meaning transactions can't be processed in parallel. You can parallelize synchronization; Bitcoin Core did a really nice optimization of their synchronization process where they download the headers, then download and process the transactions in parallel. That's doable, but that's synchronization. When we get to actual main-net processing, when any block comes in, only one block gets to occupy a given height, and if there's competition over that, only one block wins. Therefore it's one sequential series of events; you can really only append. And if I have a transaction that depends on a prior transaction, I have to let the prior transaction be fulfilled before I can process the current one. So by nature, the architecture is single-dimensional; it's sequential, one after the next after the next, which is one reason blockchains have difficulty scaling.

Also, common blockchain virtual machines are 256-bit, meaning they operate on 256-bit words. So if I wanted to do a simple computation like one plus one equals two, it could take up to 64 bytes of memory, because each 256-bit integer is 32 bytes: 32 plus 32. What ends up happening is you have to do a lot of padding and a lot of extra computation, since physical processors only operate on 64-bit words at a time; that means roughly four times as much computation for the same result. I did a simple benchmark where I took a 256-bit big number, say from OpenSSL or GMP, and compared it against a 64-bit integer: doing one plus one equals two with the 64-bit integer was much faster than with the 256-bit integer.

In the real world, 256 bits is not practical for computation. Two to the 256 is on the order of the number of atoms in the known universe, and even 2^123 is more iterations than could be counted with the known energy of the universe. That's where the integrity of these cryptographic functions comes from: they become so difficult to break because these numbers are astronomically large. But we live in more of a 64-bit world; 2^64 is around 10^19 if I'm not mistaken, so about 19 or 20 digits. Right now the world mostly deals in trillions: a billion is 9 digits, a trillion is 12 digits. In a global economic system, a trillion is a lot, and a quantity of a trillion of anything in the physical real world is a lot too. We don't generally deal with anything larger than billions, maybe starting to touch trillions with some of the inflationary currencies, but in practical real-world smart contracts, 64 bits is sufficient. So 256 bits can still be utilized, but not for computation. And that's one thing we distinguish in our virtual machine design.
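To illustrate that word-size overhead, here's a minimal sketch, my illustration rather than any production big-number code, of a 256-bit addition built from four 64-bit limbs; one native add becomes four adds plus carry propagation:

    #include <cstdint>
    #include <cstdio>

    // A 256-bit integer as four 64-bit limbs, little-endian.
    struct uint256 {
        uint64_t limb[4];
    };

    uint256 add256(const uint256& a, const uint256& b) {
        uint256 r{};
        uint64_t carry = 0;
        for (int i = 0; i < 4; ++i) {
            uint64_t s = a.limb[i] + b.limb[i];
            uint64_t c1 = (s < a.limb[i]);   // overflow from a + b
            r.limb[i] = s + carry;
            uint64_t c2 = (r.limb[i] < s);   // overflow from + carry
            carry = c1 | c2;
        }
        return r;
    }

    int main() {
        uint256 one{{1, 0, 0, 0}};
        uint256 two = add256(one, one);
        // Four limb additions for what a native uint64_t does in one.
        std::printf("1 + 1 = %llu\n", (unsigned long long)two.limb[0]);
        return 0;
    }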
So, most blockchains use trees for data stores. As I was mentioning earlier, Google's LevelDB is what's called a log-structured merge tree. That's just a fancy name for writing everything in memory and dumping it to disk every once in a while, into what are essentially big sorted string tables. So it can handle a lot of data, a lot more than a conventional binary search tree; Berkeley DB is a binary search tree, and it gets slow and doesn't handle itself very well. I tested this directly by running LevelDB side by side with our Lower Level Database and with Berkeley DB, and Berkeley DB is the first one to fall all over itself; it would actually crash my computer.

So when you're dealing with disk seeks, disk access, you want to minimize the seeks, and every layer of that tree is one disk seek. At only a million keys, which is not a whole lot when you're dealing with a global transaction network, you already have around 20 disk iterations per lookup. Then if a transaction requires five lookups, or a complex smart contract maybe 20 lookups, you multiply that complexity out and you're really throttling yourself on the disk. As the data set grows, the search complexity is logarithmic, like I was saying: a million keys is about 20 seeks, and as you keep growing it goes 21, 22, 23. You can still handle a large data set, but you keep adding disk seeks ad infinitum; they're not constant time.

These data store techniques are optimized for read performance, but they're not designed specifically for blockchains. One thing blockchains do is store cryptographic objects that are always indexed by hash, so your key index is the result of the data itself. One optimization we've made in our Lower Level Database is that we don't directly store the keys; we store a compressed form of the key in the hashmap, and based on my results compared to prior database versions, that's about a 30% reduction in disk utilization. So plugging the Lower Level Database into Bitcoin, for instance, could save you, depending on the size of the blockchain, about 30% of the blockchain's disk footprint, just by that simple optimization of knowing your data structures and designing the disk storage mechanism to be optimized for them.

So the database and disk access is your fundamental bottleneck in virtual machine processing: if you need to compute on two values, or you're calling another contract, or doing any sort of complex operation in the virtual machine, it's going to require disk access. Now, if each disk access requires 20 seeks and keeps getting slower and slower, what ends up happening is that your blockchain is not only really big and really difficult to download, but the key store has a lot of data to sift through to find the basic fundamentals of what it needs. Ideally, we want O(1), which means constant time. Since blockchains are constantly growing in size, that's what I've always seen as kind of a holy grail: if we can reach constant time on disk access, that's going to be fundamental for blockchains, even when we start introducing sharding techniques.
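Here's a minimal sketch of that compressed-key idea, assuming a 64-bit checksum folded from the 256-bit hash key; this is an illustration of the concept, not the actual Lower Level Database code:

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    // Keys are already cryptographic hashes, so instead of storing the full
    // 32-byte key in the index, store a short checksum of it. The record can
    // still be verified on read, because the data re-hashes to the full key.
    struct IndexEntry {
        uint64_t checksum;  // compressed form of the 32-byte hash key
        uint64_t filePos;   // where the full record lives on disk
    };

    uint64_t compressKey(const uint8_t key[32]) {
        // Fold the 256-bit key down to 64 bits. A real engine would pair
        // this with bucket probing to resolve the rare collision.
        uint64_t c = 0;
        for (int i = 0; i < 4; ++i) {
            uint64_t limb;
            std::memcpy(&limb, key + 8 * i, 8);
            c ^= limb;
        }
        return c;
    }

    int main() {
        uint8_t key[32];
        std::memset(key, 0x5A, sizeof(key));  // stand-in for a real object hash
        IndexEntry e{compressKey(key), 4096};
        std::printf("entry: checksum=%016llx pos=%llu (8 bytes vs 32)\n",
                    (unsigned long long)e.checksum,
                    (unsigned long long)e.filePos);
        return 0;
    }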
So, blockchains rely on Moore's law. This was outlined in the original Bitcoin white paper, and you can see it in some of Satoshi Nakamoto's original posts. He discussed the scaling characteristics of blockchains based on Moore's law, where he basically said that in 20 or 30 years, sending a high-definition video over the Internet is not going to seem like that big of a feat. And that's held true for the most part. The logic is sound: with a fixed block size you get linear growth in the blockchain's data set, while Moore's law grows exponentially, so you should always have spare capacity. That's the premise behind Moore's law as a scaling solution, but it has not always held true. Moore's law is already weakening; it's not growing as fast as it once did, due to issues like quantum tunneling. If you haven't noticed, processor clock speeds have become somewhat saturated around 5, 6, 7 gigahertz. You don't really see clock speeds much higher than that; you just see more cores: 16 cores, 32 cores. That's because they're running into issues with crystal oscillation and quantum tunneling as these transistors shrink too close together. Quantum tunneling is essentially this: you have a barrier here and an electron on this side, and it should bounce off the barrier, but sometimes it just decides to disappear and reappear on the other side. Get enough quantum tunneling, and imagine the barrier is the gate of your transistor: electrons collect there, which creates a voltage, and when you induce a high enough voltage, the transistor switches on when it shouldn't. Quantum tunneling has also been destroying some processors. So we've run into this barrier; we haven't hit it head-on yet, but instead of exponential growth, the curve is starting to look more like a sigmoid, and as the sigmoid tapers off, growth slows and becomes more logarithmic in its increases.

Because of this, modern software relies heavily on multithreading, which is essentially having two or more things happening at the same time. (That's one thing quantum computers are reaching toward: having every bit able to be a one and a zero, two states at once.) We want to parallelize because it better utilizes time, which increases throughput: when you can do more in the same amount of time, you have higher throughput. So because of this clock speed saturation, modern software has been optimized using multithreading techniques, with additions like atomic compare-and-swap designed to help reduce thread contention in multithreaded code.
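For anyone who hasn't used it, here's a minimal compare-and-swap example in C++: many threads increment one shared counter lock-free, each retrying until its CAS wins. This is a generic illustration, not anything Nexus-specific:

    #include <atomic>
    #include <cstdint>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        std::atomic<uint64_t> counter{0};
        auto work = [&] {
            for (int i = 0; i < 100000; ++i) {
                uint64_t seen = counter.load(std::memory_order_relaxed);
                // Retry until no other thread raced us between load and store;
                // on failure, compare_exchange_weak refreshes 'seen' for us.
                while (!counter.compare_exchange_weak(seen, seen + 1)) {}
            }
        };
        std::vector<std::thread> threads;
        for (int t = 0; t < 8; ++t) threads.emplace_back(work);
        for (auto& th : threads) th.join();
        std::printf("counter = %llu (expected 800000)\n",
                    (unsigned long long)counter.load());
        return 0;
    }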
So there have been all of these really significant engineering feats just to improve this multithreaded nature of computation, because that's been one of the significant ways we've been able to scale computers: we've scaled out the cores. We've added more and more cores, and more cores means that many more simultaneous threads can be operating. If you only have one core, then you rely on the thread scheduler from the operating system to schedule events, but one core can really only handle one thread at a time. Having more cores gives you that parallelization.

Now, as I was saying earlier, some projects have adapted their synchronization processes for multiple threads, such as Bitcoin Core improving its synchronization logic, but everything must still pass through a sequential bottleneck: the chain. You have dependencies, each block depends on the previous block, and only one block can occupy a height at one time. It doesn't matter how much multithreading you have, because that block is that block and it can only be appended once. So the capacity that block has is the capacity the network has: single dimension, single thread.

Now, I know you've probably heard a lot about sharding, so let's form a bit of a basis on sharding and see how it ties into all of this. Sharding the ledger is a common response to these architectural limitations, where the idea is that you can have multiple devices each operating on a subset of the entire global chain. That gives you parallelization; it's kind of like having the different cores of a processor, except the processor is all of the computers and each device is like a new core, so you can do more parallel processing. That's shown a lot of promise, and it does address the issue of a constantly increasing data set size, but it still succumbs to the same logarithmic lookup complexity in the database. So in the end you're just slowing down the inevitable: you can have shards, but each shard is still going to get slower and slower, because the data sets still grow without bounds in each of those shards.

So sharding, as I see it, is a temporary solution to the architectural issues. It will only give you higher throughput immediately, until it degrades back into the same linear issues. And since blockchain entries are dependent (a debit has to be matched with a credit, a transfer with a claim, a transaction output needs to be spent by an input), shards can quickly degrade into sequential bottlenecks. Let's say I'm spending a UTXO from another shard.
I'm going to have to look up that shard in order to process here, which puts my bottleneck on the other shard, and it merges into a sequential process. So getting sharding right is very difficult, and it's also very difficult to do securely, because you're splitting up your entire data set. That's very good for improving throughput, but you can ultimately succumb to security issues, because now you've got fewer nodes working on each specific piece. Sharding is very difficult to do, and that's why we haven't really seen fully production-ready sharded networks with serious transaction throughput.

Why not layer-two solutions? This is another big one. Layer-two solutions like the Lightning Network first emerged in the blockchain scaling debates, the Scaling Bitcoin conferences, around 2015. As you can see in the topology on the right-hand side, and I know there's a lot of debate about whether this makes things more decentralized because it takes load away from miners and so forth, ultimately the way the mathematics works is that liquidity is going to be necessary. Say I want to send a transaction to Alice, but I can't open a channel to her directly, because, let's say, Bitcoin fees are high and it costs $1,000 to open a channel. I only have one channel open, with the pizza shop we both go to. Well, Alice has a channel to the pizza shop too, so in theory that's great: we can transact back and forth with one another and settle on the main chain when we're ready.

What's going to end up happening is that these pizza shops get bigger, more people keep joining, and they provide a network effect. It's called Metcalfe's law, which states that the value of a telecommunications system is proportional to the square of its participants. So as you add more participants to this little Lightning liquidity provider, let's call it the pizza shop, its value grows, which drives more liquidity into it, which causes more people to connect to it, since it's a social network of sorts: they need to send their coins to each other. So it's going to aggregate. As the fee model drives fees up, opening new channels becomes more and more expensive and less available, and more people will be forced to use the channels they already have open with their pizza shops. It's ultimately going to create these big liquidity pools, and in the right-hand graphic you can already see them forming: hubs and spokes.

So it is still somewhat decentralized, but to me it creates a lot of opportunities for bad things to happen; liquidity providers can quickly become centralized hubs. And these solutions don't solve the root of the issue, which is the fundamental architecture; they just add a layer on top of that fundamental serialized blockchain architecture. The deposit sequence, to me, really resembles having a bank account: I deposit my Bitcoin into this Lightning account, I pay money to open up the account, I keep the account open.
I can send my bitcoins around that little payment network from that little account, and everybody else can send theirs. And then once I'm ready, I can withdraw my bitcoins and settle back on the main chain. Sound familiar? Exactly. So one reason I've not really gone toward layer-two solutions is that I'm not convinced they'll maintain the level of decentralization required for these networks to truly take off. We need that level of decentralization. Looking at the way the world is: if there's any point that can become corrupted, they will try. So we need to do our best to make the system as resilient against that as possible, and that resilience comes from decentralization. To me, everything should be on chain; if it's not on chain, it's not as decentralized as it could be. This presentation is going to outline how we do that.

You've most likely seen this argument: mining is slow and centralized. It creates a centralized arms race, it consumes a great deal of energy, and it tends to create centralized pools, much like layer-two solutions do with liquidity, except that in the Lightning Network the liquidity providers play the role the mining pools do: how much Bitcoin you have determines how much liquidity you can provide in those channels. Most of the proof of work is wasted as well: even though the network runs at a given hash rate, only one hash wins and ultimately becomes the block. Because of this, mining pools tend to be the only way people can earn money from mining. Many people have abandoned proof of work altogether to go to pure proof of stake, but that approach has its own issues as well.

So where is this all going? We've been designing and implementing new blockchain architectures for over seven years. The project is called Nexus. I presented on our security-focused operating system design last year. Essentially, we have taken the concepts that reality and simple mathematics outline for us, to achieve maximum throughput in a multi-dimensional sense.

Currently, we're still in the single-dimensional phase, which is called Tritium, and now Tritium++; Tritium was released a couple of years ago. As for benchmark tests: I ran a live network over localhost, so bandwidth was not a limitation, because I wanted to see pure processing capacity. This was all on one computer. I had about eight different command-prompt scripts hammering the node with transaction requests, and a node processing the transactions, so the same machine's resources were both producing and processing the transactions. That tapped out at about 10,000 to 12,000 contracts per second, which is about five megabytes per second, and most of my computing cycles were going to generating the transactions. It processed straight through the chain, not a problem.

One of the reasons we're still in the single-dimensional phase is that as we multiply out exponentially by adding dimensions, think x versus x squared, we also multiply our margin of error.
So we're focused on perfecting the single-dimensional blockchain first, to get all the bottlenecks out of the single-dimensional layer, because that single-dimensional layer is then going to be composed into a multi-dimensional object. If we have inefficiencies in our fundamental single-dimensional layer, they're going to multiply out, so we wanted to clear out as many bottlenecks as possible. And each shard is not necessarily going to process 12,000 contracts per second; that's just the maximum on my specific hardware, with bandwidth not an issue, to see how well everything processed and benchmarked. I was certainly very impressed to be able to reach that level.

Our register-based virtual machine operates anywhere from 10 million to 70 million instructions per second, or 10 to 70 MHz. I went down into the really significant details on this virtual machine: it's a register-based architecture, and it ends up being a register memory manager with 64-bit registers that fit directly in the cache of your processor. It's not amortized, it's one for one; it's software designed to be as close to the processor as possible, so it can run everything right off the processor and you don't pay the memory latency penalty. That's how we were able to reach 10 to 70 MHz; moving to that data object design substantially improved the performance.

Our database is also O(1). I've tested it up to 500 million keys, and it operates at about 450,000 reads per second, compared to Google's LevelDB, which, as I said earlier, Bitcoin and Ethereum run on, and which tops out at about 80,000, and that's being very generous; on average it does maybe 40,000 to 50,000 reads per second. If I flush the pages and start over fresh, or I write 1,000 keys, then write 10 million keys, and then read back those first 1,000 keys, LevelDB's read performance drops to about 10,000, maybe 8,000, but the Lower Level Database stays at 450,000. Sometimes it ranges from 150,000 to 450,000 depending on the paging conditions, but it stays constant time, and I mean constant as in a consistent number of operations: it only requires three disk seeks to read any record of any size anywhere. I've done that through a combination of what you could call multi-dimensional hash maps, multi-layered bloom filters, Fibonacci hashing with forward and reverse linear probing, and a couple of other techniques. I've been developing that database since about 2016; the last iteration was deployed with Tritium, and the next iteration is going to be deployed with Tritium++. And as I said, it's constant time straight through.

So, on average, my node will process anywhere from 30,000 to 100,000 contracts per second. What I mean by "process" is: as soon as you receive the block, you process all the transactions and commit them to disk. Measuring that time, processing all the virtual machine bytecode and all the transactions, comes out to about 30,000 to 100,000 contracts per second.
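Here's a rough sketch of the constant-seek lookup idea described a moment ago, assuming Fibonacci hashing into fixed-size hashmap buckets; this is an illustration of the approach, not the actual LLD source:

    #include <cstdint>
    #include <cstdio>

    // 2^64 divided by the golden ratio; multiplying by it scrambles the
    // bits evenly, which keeps linear-probing runs short.
    static const uint64_t GOLDEN = 0x9E3779B97F4A7C15ull;

    uint64_t fibonacciBucket(uint64_t keyHash, uint64_t nBuckets) {
        return (keyHash * GOLDEN) % nBuckets;
    }

    int main() {
        const uint64_t nBuckets = 1ull << 20;
        uint64_t keyHash = 0xDEADBEEFCAFEF00Dull;  // stand-in object hash
        uint64_t bucket = fibonacciBucket(keyHash, nBuckets);
        // In a real engine: seek #1 reads this bucket from the hashmap file,
        // seek #2 reads the record header, seek #3 reads the record data.
        // A fixed cost, versus ~20 seeks for a tree holding a million keys.
        std::printf("key maps to bucket %llu of %llu\n",
                    (unsigned long long)bucket,
                    (unsigned long long)nBuckets);
        return 0;
    }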
Back to the contract processing: it's really, really fast because of the way we've designed it with the pre-states. We package the pre-state into a contract object, which is a self-contained object: you can take that contract object, and it has everything it needs to self-execute, because with us a contract is a register script. It contains a register pre-state and post-state, and a set of operations. There's a primitive operation, something like debit, credit, read, write, move, transfer, claim, or validate; those are the basic state changes. And then we have the conditional virtual machine, which is essentially a conditional contract that's appended, so you have a primitive operation controlled by a conditional contract, which in turn contains the register script it operates on. A contract, by nature, operates on one register at a time, and there can be up to 100 contracts in a transaction in Nexus.

This ends up being very, very fast, and it also solves the issues around reorganizing the chain: we can revert back to prior states without having to reverse the computation, and we can do it reliably. If I have to disconnect a block and reconnect a new one, instead of just having to append with uncles (as far as I'm aware, Ethereum has a difficult time rolling back its state tree), we're able to just revert right back.

And, like I said, the contract itself is self-contained, which becomes really important when we get to the multi-dimensional sense. Right now, a contract is bound to a transaction object, and the transaction object is bound to a block. The transaction has its own Merkle tree over all of its contracts, so a contract and its hash can actually be used in a Merkle proof. You really only need your block header and your transaction header, and you can plug any contract in there; the contract is self-contained. I don't need to know everything that happened before that contract. As long as I have the contract and a valid Merkle proof into the block, I know that its pre-state is valid. This could be the 100 millionth transaction operating on that specific register; all I need is that 100 millionth transaction, and boom, I have it.

We'll get into this in more detail later, but the signature chain architecture also means we don't need to keep all the signatures: it's a chain of signatures, so you can actually just keep your tail. Because a signature chain is kind of like a mini blockchain, we can discard pieces that aren't needed anymore; when you have a proof of a chain that's sealed on both ends, you don't need certain pieces of information between those two points. That allows very significant pruning of the signatures and public keys in the signature chain, which is a massive reduction in blockchain size, and we do it securely: it's just as secure as keeping the signatures. That's what's so beautiful about it; that's how you know you have a good architecture, when everything fits together beautifully and you find effects you didn't intend that end up being really valuable. Not to take anything away from signature chains, but the pre-states have definitely been really fun to get into.
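To show the shape of such a Merkle-proof check, here's a toy sketch with a deliberately non-cryptographic hash so it stays self-contained; a real chain would use something like SHA-256 and would also track left/right ordering on the path:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy mixing function standing in for a real cryptographic hash.
    uint64_t toyHash(uint64_t a, uint64_t b) {
        uint64_t h = a ^ 0x9E3779B97F4A7C15ull;
        h = (h ^ (h >> 27)) * 0xBF58476D1CE4E5B9ull;
        return h ^ b;
    }

    // Fold each sibling hash on the path into the leaf to recompute the root.
    uint64_t applyProof(uint64_t leaf, const std::vector<uint64_t>& siblings) {
        uint64_t h = leaf;
        for (uint64_t s : siblings)
            h = toyHash(h, s);
        return h;
    }

    int main() {
        uint64_t contractHash = 0x1234;                       // the leaf
        std::vector<uint64_t> path = {0xAAAA, 0xBBBB, 0xCCCC}; // sibling hashes
        uint64_t root = applyProof(contractHash, path);
        // A verifier holding only the block header and this path checks the
        // recomputed root against the header's root; no other data needed.
        std::printf("recomputed root: %016llx -> compare to header root\n",
                    (unsigned long long)root);
        return 0;
    }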
So, our architecture is multi-dimensional; this is where we start getting multi-dimensional. A single-dimensional chain locks a sequential series of events behind the head of the chain. We've expanded this concept into a 3D lattice-based chain structure, as you can see in the diagram below. Think of the three-dimensional block like a Rubik's cube: you've got your three by three by three, and that very center cube is the root. Based on perspective, that center cube projects out to the top-most corner and the back-most corner, and that creates the chaining through the layers. Think of the base layer as a lattice, like a crystalline lattice: these are your shards, your state channels, and then you chain on top and you chain to the side.

What ultimately happens is that this root hash ends up being the final reduced copy of all of the hashes from one end to the other. What that tells me is that this entire structure is now a two-dimensionally chained structure: if I change any bit anywhere, I'm going to break the chain in this direction and break the chain in that direction, and ultimately I'm going to break the final root. So instead of chaining little blocks together, we're chaining objects, and we're layering these objects together. Each one of these layers, the L1, L2, and L3 layers, forms a different consensus mechanism, and they check and balance one another; each layer is responsible for a single dimension of chaining. They're only really dealing with one dimension themselves, but then those get stacked together and reassembled, kind of like a factory worker building a simple piece that then gets reassembled into a larger object.

This root cube is a very important part, because it contains the reduction of all of these, so the root cube and its final hash represent the hash of the entire three-dimensional object. That means if I change any bit in any one of those directions or places, it's going to break the root hash; it's going to break the lattice. So if I wanted to attack this from the L1 layer, I would have to build a new lattice if I wanted to change or insert anything in this history; I'd have to change two dimensions of history, because I'm not going to be able to reassemble the chains that happened afterwards. You create this very rigid two-dimensional object that has a lot more capacity. You have this aggregation, just like reality: we live in multiple dimensions so we can store more information, so we're chaining in multiple dimensions to afford ourselves that additional capacity.

So, think in shapes, not lines. An artist evolves from stick figures to beautiful works of art, capturing the full multi-dimensional picture as their skill grows. That analogy is basically saying: when we're kids, we draw stick figures, single-dimensional lines. As we get older, we start learning about shading, circles, spheres, and cubes, and we start expanding out. I know a drawing is a projection, but what we're doing is mimicking those aspects of the different dimensions; the shading comes from your depth and your light.
So, a blockchain is an informational recording system, providing an indisputable ordering of, and associations between, different pieces of data. Creating a multi-dimensional chaining system provides us the opportunity to store more data in parallel, using the natural formation of a shape to delineate shards.

As I was explaining, each one of these is a shard. Take a simple four-shard object: those shards have a chaining structure across them, so they end up creating a final multi-dimensional piece. Now, this shape is what would be swapped in or out, so if I wanted to attack it, I would have to create that entire shape first, and I would also have to have that shape accepted by the L2 layer, which, if anything conflicts with another shape, is going to look at the total reputation. The depth, the significance, the weight of this data fundamentally comes down to your trust and the amount of resources you've contributed. These L1 layers ultimately aggregate their trust together and join to create a shape with an exponentially higher overall weight, which no single participant would be able to pull off.

This lets us do this parallel processing with a higher degree of security, because we have this checking and balancing happening between the layers, and the reputation essentially prevents somebody from just putting up a bunch of nodes. The reputation system is somewhat like an immune system: it knows itself, because it knows the participants it's been working with; all of the nodes know who they've been working with. If a new node comes in, it has the opportunity to join the consensus process, but the others will be able to identify malicious behavior, because they know what a trustworthy node does. So when a node comes in and starts trying to spam, or tries to split a shard by creating a conflicted transaction, with Nexus you can identify the conflict very easily. We already do that in our memory pool in a single-dimensional way: if you produce a transaction that conflicts with another, that transaction won't relay, and other nodes will actually flag a block produced with that transaction as conflicted, and that block won't relay unless somebody builds another block on top of it. So there are a lot of simple and elegant ways to identify this malicious behavior, and that's done at the per-shard level, because each of these shards is agreeing with its neighbor as they send their reduced hashes across while building the lattices.

Think of each one of these as having a set interval; generally, a three-dimensional block will be on about a one-minute interval. The production of the L1 and L2 layers happens over the course of that minute, in real time; they're added independently, in real time, and then the miners, the L3 layer, follow behind and seal up the prior minute. The multiple consensus layers are each responsible for their own dimension of chaining, as I was saying, checking and balancing each other.
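As a toy illustration of that reduction, and only an illustration, not the actual consensus code, here's a sketch that folds a 2x2 lattice of shard tip hashes across one axis and then the other into a single root, so flipping any bit in any shard changes the root:

    #include <cstdint>
    #include <cstdio>

    // Toy fold standing in for a real cryptographic hash combiner.
    uint64_t fold(uint64_t a, uint64_t b) {
        uint64_t h = (a ^ b) * 0xBF58476D1CE4E5B9ull;
        return h ^ (h >> 31);
    }

    int main() {
        // Four shards arranged as a 2x2 lattice of tip hashes (made-up values).
        uint64_t lattice[2][2] = {{0x1111, 0x2222}, {0x3333, 0x4444}};
        uint64_t rows[2];
        for (int y = 0; y < 2; ++y)              // chain across the x-axis
            rows[y] = fold(lattice[y][0], lattice[y][1]);
        uint64_t root = fold(rows[0], rows[1]);  // chain across the y-axis
        // Any bit change in any shard tip propagates through both folds.
        std::printf("root cube hash: %016llx\n", (unsigned long long)root);
        return 0;
    }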
So we have the L1 layer, the shards, chaining in this direction, passing their hashes over; then the L2 layer helps create the crossing hashes, combining these together and adding that on top; and then the L2 layer also receives the hashes from the miners' shares on top and weaves that into the root cube. I'll get more into the details of the L1 and L2 shortly.

So we ultimately get increased security and decentralization. As I was saying, imagine it like a Rubik's cube; the picture on the right does it some justice, showing that center cube. That's an important aspect, because it shows the chaining diagonally, vertex to vertex, and those chains form that center cube. The center cube then gives us the root hash of the block. So by having multiple chains in all these different directions, we have these lattices checking and balancing each other. As I was saying, instead of swapping out a whole block, we're swapping out lattices and shapes of these objects, and we're resolving conflicts between them. What that ends up doing is that an attacker now has to attack two dimensions instead of one, and then even another dimension on top of that. It exponentially increases the resources required to attack the chain. Then, when you add reputation into the system, it becomes even more difficult, because you essentially can't buy your way in: all the participants that existed prior have protection, and they can identify malicious actors.

So these lattices are woven together in up to three layers, and as I said, it requires an attacker to coordinate many more pieces to create a competing lattice structure, which also has to exceed the existing trust and weight, which becomes ever more difficult. Even somebody with a lot of mining power would only have influence over one dimension of the structure. That's why we believe combining these multiple layers of consensus is really important for the long-term decentralization and security of these systems, rather than any one alone. Just like the three branches of government check and balance each other, these three consensus layers check and balance one another, requiring an attacker to compromise all three of them to compromise the three-dimensional block, and to exceed the resources everyone else has contributed.

So, we use three layers of consensus: the L1, the L2, and the L3. The L1 layer is responsible for creating single-dimensional chains on the z-axis, as I was saying earlier, and there can be multiple L1 shards or chains. Think of each as its own little blockchain: since it's a single-dimensional object, they're their own individual blockchains, and they can process in parallel. And then, as I was saying, those are linked across the x-axis and the y-axis. The L2 layer is responsible for weaving the shards together along the x-axis, and also for resolving conflicts between L1 shards. We don't want shards to have to resolve conflicts among themselves; for one thing, they don't even know they have conflicts with one another, because each is in its own shard. So that's the L2 layer's job.
Operating in an aggregated processing environment, the L2 layer takes these L1 shards and runs a consensus process: if there are two conflicting forks of an L1 shard, the L2 nodes help decide which one is included, based on the highest amount of trust and weight, and other L2 nodes obviously have to agree, so you create a really robust consensus mechanism. Now, the L2 layer is weighted by stake, and it's also got trust involved: you have to build up trust over time by consistently contributing resources to the network, and you also have to have a stake, some NXS available to lock up, to take part in this consensus process. The L2 layer is responsible for the computation of the final root cube, which becomes the final 3D block hash.

The L3 layer is the top layer, and it's responsible for sealing the L1 and L2 layers. As I said, the L1 and L2 are built in real time; once those finish their minute interval, the miners submit their hashes for the prior interval to seal up the prior block, the L2 nodes receive that and create their final approval, and then the miners start hashing on the next set. So the miners are one minute behind, everything is going in real time, and they're wrapping up and sealing the blocks. The miners have their own chaining structures as well.

The L3 layer is driven by shares, so it's not "one hash rules all." It's a consistent process: as a miner, you'll be hashing over a certain set of data inputs at certain nonce values, searching for the hash with the highest amount of weight, because weight determines how much you get paid. So you have an incentive as a miner to search for the one hash that brings you the highest weight. We're going to make the weight slightly super-linear, probably something like n to the 1.618, something in that range, so you get more reward for higher-weighted hashes and you won't just spit out 10,000 cheap hashes, because you'd make less money that way. There will still be a difficulty threshold and so on, but the idea of the L3 layer is to utilize all the computing power available on the network by having everybody contribute hashes, with all of those hashes based on the same data inputs, agreeing on the same root cube.

By hashing and submitting a share, a miner is verifying that they have checked that root cube and that it is a valid cube. Every one of these miners adds an additional layer of security, whereas with Bitcoin, one hash wins and that's it: that winner could be lying, they could be trying to double spend, and it doesn't matter, because the single winning hash beats everything else. Splitting this up creates a secondary consensus among the miners, which ultimately decentralizes the mining protocol much more, and more correctly and adequately utilizes that computing power so people fully receive the security from it.
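A quick sketch of that incentive, taking the n^1.618 exponent floated above as a working assumption rather than a fixed protocol constant:

    #include <cmath>
    #include <cstdio>

    // Reward grows slightly super-linearly with a share's weight, so one
    // high-weight hash pays more than many low-weight ones.
    double shareReward(double weight) {
        return std::pow(weight, 1.618);
    }

    int main() {
        double one_big = shareReward(10.0);          // one share of weight 10
        double many_small = 10.0 * shareReward(1.0); // ten shares of weight 1
        std::printf("one weight-10 share: %.2f   ten weight-1 shares: %.2f\n",
                    one_big, many_small);
        return 0;
    }

One weight-10 share pays roughly four times what ten weight-1 shares do, which is exactly the pressure against spraying out thousands of cheap hashes.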
And I believe mining is very important; it creates a lot of security. Correctly utilized, it can be a very powerful instrument and proof mechanism, which is what we've done here. Don't throw the baby out with the bathwater: mining by itself has its own issues, but combined in a hybrid consensus mechanism with other mechanisms of proof, it becomes highly effective and very secure. And proof of stake by itself, again, can be dangerous when the market cap is small, as it becomes trivial to buy a controlling interest in the network; when we're talking about creating systems with the potential to stand up against the big banks, it's very easy for them to buy into those voting systems. So we want to combine these multiple consensus algorithms in this multi-dimensional structure, get the best of all of them, and have that provide additional security to the shards and the state.

So, how do we shard the database? Data sharding becomes very problematic, especially when you have a dependency in another shard, say I have to credit a debit from another shard, because you quickly degrade into a sequential process: in order to verify that credit, I have to verify the prior debit. So how do we operate without needing to synchronize global state for processing?

Well, what we're doing, and this is a really fun way to use LISP, the Locator/Identifier Separation Protocol: LISP essentially allows you to create a single mapping, a single address on the Internet, so you can change Wi-Fi or go to a different place and still have the same address. It's like having a phone number for the Internet that doesn't change. Since LISP allows us to create these addresses as IPv6 crypto-EIDs and authenticated EIDs, we're going to take a reduced checksum form of the data key and use that as our lookup index. A node that has that piece of data just registers with the mapping system and says, hey, I'm servicing this IPv6 address, where that IPv6 address is a 128-bit reduction of the key and index of the object you're looking up. What ends up happening is that you can just open a TCP/IP connection to that hash, to that IPv6 address, and the node servicing it (which obviously has to have its own server running) receives the request and delivers the data back.

So we create this globally synchronized database where we don't have to iterate distributed hash tables or do all these complex lookups to find an object in a sharded state. And when one node writes into that state, when I write to that IPv6 address with a proof from the chain structure, other nodes write it simultaneously. Instead of me writing it, sending it to my nodes, them writing it and sending it to their nodes, which is generally how propagation works in peer-to-peer networks, everybody writes it at the same time. So you get this parallelization, but you also have this globally accessible common interface for data objects.
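Here's a minimal sketch of forming such an address, with a toy 256-to-128-bit reduction and made-up key values, purely to show the mechanics of turning an object key into an IPv6-style EID:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Stand-in 256-bit object key, as four 64-bit limbs (made-up values).
        uint64_t key[4] = {0x0123456789ABCDEFull, 0xFEDCBA9876543210ull,
                           0x0F1E2D3C4B5A6978ull, 0x8796A5B4C3D2E1F0ull};

        // Toy reduction: fold 256 bits down to the 128 bits an IPv6
        // address can carry. A real system would use a proper checksum.
        uint64_t hi = key[0] ^ key[2];
        uint64_t lo = key[1] ^ key[3];

        // Print as an IPv6 literal: eight 16-bit groups. A node servicing
        // this object would register this EID with the mapping system.
        uint64_t g[8] = {(hi >> 48) & 0xFFFF, (hi >> 32) & 0xFFFF,
                         (hi >> 16) & 0xFFFF,  hi        & 0xFFFF,
                         (lo >> 48) & 0xFFFF, (lo >> 32) & 0xFFFF,
                         (lo >> 16) & 0xFFFF,  lo        & 0xFFFF};
        std::printf("EID: ");
        for (int i = 0; i < 8; ++i)
            std::printf("%04llx%s", (unsigned long long)g[i],
                        i < 7 ? ":" : "\n");
        return 0;
    }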
So you don't need to seek and find what shard something is on, or what mapping lookup, what IP address, what cluster; you just open a socket connection to the data object, and the nodes servicing it give you the object. That gives us a really common interface and a really beautiful way to handle it, to where you start seeing the Internet slightly differently. With these addresses being identifiable, when we get to the Nexus protocol we'll afford larger EIDs, because we're developing the open Nexus execution stack, which lets you decouple the identifier from the locator. The identifiers will most likely allow 256 bits, for cryptographic operations, cryptographic objects, anything like that. And since each hash is generally unique (the same data input should give the same hash on any node), you won't really run into collisions when hashing your objects and using the object's hash as its lookup address. That becomes very nice, because you look up your object, then rehash it and make sure it matches the address, so you can tell if it was tampered with. And as long as that hash has a Merkle proof into a root cube, into a three-dimensional block, you know it was a valid object. Just as I said, the contract itself is self-contained: I can download a contract from somebody, or from a peer, or even just download my sig chain, and it only takes a few hundred megabytes; you've got the Merkle proofs all the way there. It fits together really nicely. And part of the idea of a three-dimensional block is that you shouldn't need to synchronize anymore, if we do it correctly. The reason we have to synchronize is that everything is locally bound to our computing, but as we build out these sharded parallel layers, we open up a lot of really unique opportunities.

So, use what's been proven to work. Most virtual machines are stack-based, like the Java virtual machine or the Ethereum virtual machine. Modern processors have what are called registers, which store the values being operated on, and register architectures are more efficient. So we don't use stacks; in computing terms that's just old tech, and it's slow. We use registers, and register lookups are O(1), constant time. We're able to reach very high throughput with our register-based virtual machine, as you can see above: that is a live demonstration showing a payload of 20 megabytes. This particular instance was doing a block every five seconds, so that payload was about four megabytes per second, at 12,000 transactions per second: 12,000 contracts. We do that with a lot of aggregation, and like I said, this is a single-dimensional chain; this is showing your theoretical maximum processing per shard.
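To show the flavor of a register machine versus a stack machine, here's a toy register-based interpreter with 64-bit registers, an illustration of the architectural point rather than the Nexus VM itself:

    #include <cstdint>
    #include <cstdio>

    // 64-bit registers map directly onto native machine words, so each VM op
    // is roughly one hardware op; contrast a 256-bit stack machine, where
    // every push/pop shuffles 32-byte values.
    enum Op : uint8_t { LOAD, ADD, HALT };

    struct Instr { Op op; uint8_t dst, src; uint64_t imm; };

    uint64_t run(const Instr* code) {
        uint64_t reg[8] = {0};  // register file, small enough to sit in cache
        for (const Instr* ip = code; ; ++ip) {
            switch (ip->op) {
                case LOAD: reg[ip->dst] = ip->imm;        break;
                case ADD:  reg[ip->dst] += reg[ip->src];  break;
                case HALT: return reg[0];
            }
        }
    }

    int main() {
        Instr program[] = {
            {LOAD, 0, 0, 1},  // r0 = 1
            {LOAD, 1, 0, 1},  // r1 = 1
            {ADD,  0, 1, 0},  // r0 = r0 + r1
            {HALT, 0, 0, 0},
        };
        std::printf("1 + 1 = %llu\n", (unsigned long long)run(program));
        return 0;
    }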
And like I said, we've wanted to perfect the single-dimensional chain so that as we start adding more and more shards, we don't multiply our inefficiencies. We want to be as clean, lean, and fast as possible, because faster means fewer instructions for the computer to execute, and fewer instructions means less battery consumed; it runs on more devices, and it scales more. So we get as close to bare metal as we possibly can, and we've done that with the database and our virtual machine, which, as I said, operates at up to 70 MHz under certain circumstances. We built it using registers, built our own memory manager, all of that. We don't rely on external dependencies that were not created for blockchains; our Lower Level Database was designed specifically for blockchain.

So, we're adding more dimensions. We consider a blockchain a subset of our architecture: the blockchain is like a single shard. A user-level identity called a signature chain gives us signature aggregation and other benefits; as I was saying, we only need to keep the head and tail, and we can discard a lot of what's in between. We can discard pre-states: all we need is the most recent pre-state that modified a register, and we can discard all the prior pre-states. So even the raw skeleton of the single-dimensional structure is extremely efficient, because we can prune so much, and it's also self-contained. With the contract being self-contained, you can broadcast that contract to anybody, and as long as they have a set of headers and some basic Merkle proofs, they have everything they need.

So, the L1 state shards resemble a single-dimensional blockchain, and they're linked across the x-axis. To change anything within this lattice would require redoing all the work that created it, requiring exponentially more resources to attack at any given level. As we add more dimensions, we add exponentially more capacity, and with that exponential increase in capacity comes an exponential increase in consensus, and therefore an exponential increase in the resources needed to attack it. That's because that many more people can fit within the consensus system. With Bitcoin, very few people can contribute to the mining, or own a mining pool; it's a bit of an old story at this point that normal people can't gain access to it. The point of the L1 layer is to give more of those people the capability to take part in the consensus, and that starts forming the foundations for a decentralized autonomous organization and the different voting groups that will regulate, or govern, the entire system.

So, in conclusion, we're currently deploying Tritium++, as I was saying, the last update using the single-dimensional architecture. The goal for Tritium, then Amine, and then Obsidian is to reach the highest capacity we can in one dimension before expanding to three. Amine is going to add the shards and the sharded block layer, but the shards will enter as proofs into the block, so you'll still be able to process over a regular classical blockchain; you have to opt into a shard on Amine.
So the main Nexus blockchain right now: you basically have a bunch of proofs in the block, and a proof can be a legacy transaction, a Tritium transaction, or a checkpoint from a hybrid network. Then we add one more, which is going to be an Amine shard, and essentially that allows people to volunteer to enter the sharding system. It gives us a nice runway in the deployment to make sure there are no screw-ups, that everything works as it should, and that we actually see that throughput. And then the final phase, Obsidian, is going to wrap it all together. Amine will still have a linear, single-dimensional blockchain, but it's going to sit on top of these shards, with the shards feeding into it, and the block still found by "one hash rules all." On Amine we'll still have the three consensus mechanisms we already run: the proof of stake, the hashing, and the prime. Then, when we get to Obsidian, that's the final bow wrapping up the whole three-dimensional block, and that's going to be a full three-dimensional block.

This has been a very interesting journey developing this technology, and it's been very fun. I used to dream about a three-dimensional block, what would that even look like, and it's been really cool to see it take shape and to see the results as they've come in. I appreciate everybody taking the time for this presentation. My name is Colin Cantrell, and I am the principal architect and lead engineer of Nexus and the Nexus protocol. You can find more information at nexus.io. And if you'd like to get involved, we're a community-driven project; it's not a corporate ICO-type thing, we never did an ICO, it was mined from zero. The blockchain was launched in 2014, so we've been around the block quite a few times, and that's one reason we've been able to develop this technology, and why I've spent so much time perfecting the architecture before fully implementing it or sharing too many details. This is the first time I've done a really deep dive on multi-dimensional chaining, so I hope everybody really enjoyed it. I certainly enjoyed the presentation, and thank you very much.