Hi, Merrick, so this is a bunch of ... I'm just going to talk about things that we've done in libbitcoin, kind of since the beginning, but also a lot more recently, in terms of making a scalable node implementation. There's a lot more to libbitcoin, and I don't have a lot of time, so I'm just going to go through quickly and really just enumerate things that we're doing. Some of them are kind of mundane, but some of them are really different, and anybody who's ever tried to make or understand a node will scratch their heads and go, what the heck, how do you do that? Anyway, so first of all, what's scaling? Add hardware, hopefully get a linear improvement in whatever you're trying to increase. More hardware, more performance. Okay, now how do I get back? There we go, got it. There we go. Okay, so Amir actually started libbitcoin. Who knows Amir Taaki? All right, yeah, Amir's awesome. Those are the three principles he laid out in the first post on Bitcointalk about what we were after. I focus your attention on the center one there, the second one: scalability, Bitcoin built today with the future in mind. And so we have a definition and an objective, and I'm just going to go through the different components. See if I can zoom this; that's not a touch screen, is it? No, all right. There we go. Wow, this Prezi thing's cool. I'm a PowerPoint guy and I'm trying this Prezi thing and it's kind of freaking me out. All right, so gray boxes are external dependencies. Libbitcoin is a set of libraries, but it also has command-line implementations for node and client-side stuff; we call the client app the Explorer. The left side is the client stack, the right side is the server stack. Gray is external dependencies. We worked really hard to shrink those down. I don't know, William, do you remember how hard it was, right? This is as small as you can possibly get and do something like we're doing, as far as dependencies. And the dotted lines are optional.
So they're only used in a very narrow set of circumstances. You don't even need to build them. And the blue boxes are the libraries, and this is a build system repo down here; it's just for the maintainers. And you'll notice there are three libraries that are the same on both sides: the base library, libbitcoin, plus network and protocol, okay? So I'm just going to be talking about the server stack, the right side. When it comes to scalability, it kind of affects both. So we have a database; there's a repository called libbitcoin-database. And I apologize, I'm going to go pretty quickly because there's a bunch of stuff here. Okay, so the database is not a database that you buy. It's a database that we made; it's memory-mapped files. I'll talk about that. What we end up with is a logically contiguous byte array. So just a big array of RAM, essentially, and it gets mapped on and off the disk by the operating system's virtual memory paging, which is very efficient. And the most recently accessed parts of the data are generally memory resident. If you have enough memory to store the entire chain, it will probably not all be stored in memory, because some of it just won't be accessed, but it can be. And I test with a couple of machines, a Windows machine and a Linux machine, that have 256 gigs of RAM and are super fast. And it's just nice to see the whole blockchain, and the server's indexing of the payment addresses, all stored in RAM. So add more hardware, get more performance; again, that's what we're after, up until you hit the limit of maximum RAM performance, and then you start hitting bottlenecks in other areas. It outperforms explicit caches, right? So other implementations will have a cache for this and a cache for that, and I'm gonna go over that a little bit; there's all kinds of caches. Maintaining a cache costs money. And you don't want a cache if you don't need it, and we don't need it. The blockchain is a cache, right? Everybody sees the same chain.
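The memory-mapped-store idea described above can be sketched in a few lines. This is a minimal Python illustration of the concept, not libbitcoin's actual C++ code; the file name and sizing are made up for the example. The point is that once a file is mapped, reads and writes are plain byte-array operations, and the OS pager decides what stays resident in RAM.

```python
import mmap
import os

def open_store(path: str, size: int) -> mmap.mmap:
    """Map a file into memory as one logically contiguous byte array.

    The OS virtual-memory pager moves pages between RAM and disk;
    recently touched data stays memory-resident, so hot data is
    effectively cached without maintaining any explicit cache.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        os.ftruncate(fd, size)      # pre-size the backing file
        return mmap.mmap(fd, size)  # the whole file as mutable bytes
    finally:
        os.close(fd)                # the mapping keeps its own reference
```

With enough RAM, the entire store behaves like an in-memory array; with less, the pager evicts cold pages, which matches the talk's "the blockchain is a cache" framing.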
So it's a public cache of data, and you can dump it if you want. Okay, so there are some limitations in the current implementation, which we're probably gonna spend some time on in our v5. Our v4 is coming out probably in the next few months. One of these is write flushing. In other words, you write some stuff to the RAM; you gotta flush it to the disk before you shut down, or you've got a corrupt database, right? It used to be that you didn't know when the database was corrupt. You could hard shut down and come back up on your virtual server and you wouldn't know. And now we actually know definitively. We have very reliable knowledge of whether you've actually shut down with an unflushed write, like a hard power-off or something. But we also have a configuration option to allow you to flush after every write. The problem is, if you're flushing to disk after every write, you really slow down initial block download. Significantly slower, maybe 5x slower, right? On the other hand, once you're fully synced, it's hardly noticeable. So we'll probably do some dynamic configuration there to make that a little more optimal for home users, who might be more likely to hard shut down. For servers, though, it's pretty good and very reliable. Okay, so we have the reliable detection. Okay, so we go to the next aspect of the database: it's append-only. We only write to the ends of the files. So we don't delete data from a file and squish it up and compact the file. We just write; these are Amir's original implementation ideas. So all new objects are appended to the memory maps. There's like seven files, and they remain indexed indefinitely. So you index something by its hash, it's always there. Even if it gets reorged out, it doesn't matter. It's still valid, it's just not on the strong chain, right? Objects have metadata; this kind of came along in v3.
We update state on certain objects to reflect, say, the height that a transaction confirmed at. But we don't delete state from the files, and so you're probably thinking, okay, well, that's going to lead to a lot of bloat and fragmentation; you're going to have to defrag this stuff, and that's going to be an expensive operation. It actually doesn't lead to noticeable bloat at all. We've run these things for months under full transaction load. I'll talk about what we do with the memory pool, but we're just putting tons of data on the disk. Everything valid that's coming at us across the network, we put on the disk, and basically most of it gets confirmed, and it has no real consequence. The amount that is fragmented, in other words kind of dead data that's still indexed, doesn't end up mattering. But if you want to defragment it, what do you do? It's a cache: just resync. You can do that in two hours, so not too bad. And if you have a checkpoint in store, you can obviously do it much quicker. So defragmentation cost is fully deferred. We don't defrag. If you want to do it, you can do it, but we're not doing it dynamically, so we're not slowing down your validation and all that sort of stuff. Okay, I think I covered, yeah, resync to defrag, and then... So we don't support pruning. The objective is not to make non-nodes. I mean, if you're gonna be a node, you can't be pruning stuff; you can't support sync of other nodes. So that's not a goal, but this technique wouldn't be too friendly to pruning, because how do you get rid of the data if you're only appending? You'd have to write some defragmenter and really hurt performance. So what's the cheapest resource you have on a computer? The hard drive, right? So what do you optimize for? Not the hard drive, right? You don't care how big the store is, really. I mean, not in the range we're talking; it's cheap. So now we can start getting into more interesting v4 stuff: concurrent write.
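The append-only design above can be illustrated with a small sketch. This is a conceptual Python model, with a `bytearray` standing in for the memory-mapped file; it is not libbitcoin's real store layout. It shows the two key properties: objects are only ever appended and stay indexed by hash forever, and "reorged out" is just a metadata change, not a delete.

```python
import hashlib

class AppendOnlyStore:
    """Append-only object store indexed by hash.

    Nothing is ever deleted or moved, so an offset, once indexed,
    stays valid forever. A reorged-out object remains in the file;
    it is simply no longer flagged as being on the strong chain.
    """
    def __init__(self):
        self.data = bytearray()   # stands in for the mapped file
        self.index = {}           # hash -> (offset, length)
        self.confirmed = set()    # metadata: hashes on the strong chain

    def put(self, payload: bytes) -> bytes:
        digest = hashlib.sha256(payload).digest()
        self.index[digest] = (len(self.data), len(payload))
        self.data += payload      # always write at the end of the file
        return digest

    def get(self, digest: bytes) -> bytes:
        offset, length = self.index[digest]
        return bytes(self.data[offset:offset + length])

    def reorg_out(self, digest: bytes):
        self.confirmed.discard(digest)   # flag change only, no delete
```

Defragmentation, as the talk notes, is simply "resync from the network": rebuild the file without the dead records.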
The database can support parallel block download and write. We can write blocks from multiple peers to the store at exactly the same time on multiple threads. So the node can run on one thread, it's asynchronous, but it can also run on, well, I run on 64 threads. It's great. I've run, not normally, but I've run thousands of peers as well. So there are certain things that you do have to guard against, if anybody's done concurrent programming. And sorry, I'm talking fast; this is kind of technical stuff. The memory allocation ratio is configurable. In other words, when we allocate memory for these files, we pre-allocate a big chunk, right? You can configure the ratio of that reallocation. You can also just cause it to allocate the entire size of the blockchain initially, and then you never have to reallocate any memory. So when a new object comes in, you have to reserve some memory to write that object. That's just a little math. There's not even any reading or writing going on. We just lock, do some math, hand you the space. So there's a lock there. Then remap: the operating system decides it wants to move all the memory somewhere else where it has more space. All the pointers become invalid. Everything you're doing on all the other threads goes haywire. It's a disaster, right? We actually had this problem in earlier versions. And so that's guarded. If there's a remap due to a reallocation, everything that's got a pointer is either still working or ends up blocked until the remap occurs, and then it continues. So it's safe, it rarely happens, and it's very fast. So it's kind of inconsequential. Metadata updates: say you're writing metadata into a transaction, like that it's been confirmed at this height. We already wrote the transaction, now we're gonna say it's confirmed. We have to guard that, because something could be reading the transaction at the same time we're writing, so there are locks there.
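The "lock, do some math, hand you the space" reservation step can be sketched as follows. This is a hedged Python illustration, with made-up names and a made-up default expansion ratio; the real libbitcoin store does this over memory-mapped files in C++ and separately guards the remap that a reallocation triggers.

```python
import threading

class FileReserver:
    """Reserve write space in a pre-allocated file.

    Under the lock there is no I/O at all, just arithmetic: bump a
    size counter and return the old value as the caller's offset.
    When the reserved size would exceed capacity, grow capacity by a
    configurable expansion ratio (in the real store, this growth is
    what forces a guarded remap).
    """
    def __init__(self, capacity: int, expansion: float = 1.5):
        self.lock = threading.Lock()
        self.size = 0
        self.capacity = capacity
        self.expansion = expansion

    def reserve(self, length: int) -> int:
        with self.lock:
            offset = self.size
            self.size += length
            while self.size > self.capacity:
                # hypothetical growth policy: multiply by the ratio
                self.capacity = int(self.capacity * self.expansion)
            return offset
```

Because the critical section is only arithmetic, many threads can write blocks concurrently: each one reserves a disjoint region and then writes into it without holding any lock.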
But not during the heavy-load process where we're just downloading all the blocks; no metadata is being written there. And that's the extent of locking in the database. Everything else is atomic pointer swaps. So we put an object in the database: we allocate some memory, we write the object to memory. Nothing can see it yet; it's not indexed. We then create a pointer to it. That pointer update is atomic, so it's safe. After that, everybody can see it, and it never goes away. So we never have to worry about any problems with concurrency. Okay, and there's no defrag, so all data remains valid. If you reorg a block out, the block's still there. It's just no longer what we call confirmed. Okay, so it's fast, right? So this is another original aspect of the database, and this has become better over time, but in theoretical terms it's the same as it's always been. We have two types of tables: hash tables and arrays. Constant time, right? Technically, hash tables are constant time in the best and average case and linear in the worst case. But what we see, because of the evenness of the spread of the data (it's all indexed by hashes, by block and transaction hashes, and then re-indexed as 64-bit hashes), is that the data spreads pretty well. And you can configure the bucket count for each hash table. So actually, in our config file, when you start up the first time, you can choose more buckets or fewer buckets, and you'll have fewer or more collisions. And so you can kind of tune it yourself. And we did that so people could help us find the optimal bucket counts for different chains. So you have very low... normally we end up with like one and a half collisions per entry in the hash table, which is pretty low, and it keeps the hash table header size small. But anyway, you've got block height indexing in arrays, which is lightning fast, constant time, and everything else is indexed by hashes, which is lightning fast.
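The bucket-count tuning described above is easy to demonstrate. This is a hedged Python sketch (the function name and the SHA-256 re-indexing are illustrative, not libbitcoin's exact scheme): because keys are already hashes, they spread evenly over the buckets, and raising the bucket count drives the average chain depth toward one at the cost of a larger table header.

```python
import hashlib

def average_bucket_depth(keys, buckets: int) -> float:
    """Average chain length over occupied buckets for a given bucket
    count, the number the talk says you tune in the config file."""
    counts = {}
    for key in keys:
        # re-index the key hash into a 64-bit slot number, then a bucket
        slot = int.from_bytes(
            hashlib.sha256(key).digest()[:8], "little") % buckets
        counts[slot] = counts.get(slot, 0) + 1
    # entries divided by occupied buckets = mean probes per lookup
    return sum(counts.values()) / len(counts)
```

Running this with a small versus a large bucket count shows the trade-off directly: a tiny table produces deep chains, while a generous one approaches one entry per occupied bucket.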
So query speed on libbitcoin has always been faster than anything out there. But what we've worked to do is really compact the size of the data and eliminate locks on the data, as we talked about before, so everything can be parallelized. So reads, writes, and updates to the store are constant time, and I kind of covered everything else. OK, so linear growth. This is pretty much the case with most implementations: the state size doesn't grow non-linearly, or only to a very small degree. What you want is that as you add more data from the network, the store grows linearly with it. You don't want 2x, 3x type growth every time you add something. So we actually do really well. I did some checking recently, synced with bitcoind on BTC mainnet. And if you don't use their transaction indexing, which is an optional switch, their sync is faster than if you turn their transaction indexing on. But we always index all transactions, so it's kind of not a fair comparison. But if you look at our store, it's smaller than bitcoind's now, even with our full transaction indexing, which I didn't actually enable for them in that example. And the server, which is a layer over our node, indexes all payments. So every single payment that occurs in Bitcoin gets indexed, so that you can query it for wallets and things like that. ElectrumX does something similar. We're actually working to become fully compatible with Electrum. I've been talking to Thomas V about it a little bit, and some people are working on it independently. But we'll end up, I think, pretty soon with API compatibility. And I looked recently at the ElectrumX state size, and we're actually about the same size, which is pretty good. So in other words, with all this performance, what I'm showing here is the important scale differences, and that we haven't made compromises on store size to achieve these performance gains. We're actually doing better than other implementations.
And it doesn't matter how many UTXOs there are; it doesn't affect the state size. And we'll talk a little bit more about that here. OK, so now we move on. That was the database; this is the blockchain. Everybody with me so far? Am I going too fast? Am I running out of time? Yeah, OK. I'm probably going to run out of time. OK, so the blockchain is implemented in these two libraries, libbitcoin-blockchain and libbitcoin-consensus, which is optional. That's really just taking the consensus code from the bitcoind implementation and making it so you can link it. But we have our own, and that's generally what I test with. So we have these other characteristics that are interesting in terms of scale, and this is where I think it starts getting really interesting. So there's no pooling in libbitcoin. There is no transaction memory pool. There is no block orphan pool. I'll talk about caching as well. So how do you do that? We just write everything to the disk. So we validate it, write it to the disk. And in v4, we've added a DAG, a directed acyclic graph, of transaction metadata, so that we can rapidly generate a block template. But that's only for the block template; it has nothing to do with validation. Actually, it will give us some further optimizations in validation as well. So yeah, we just save them right to the store. And so when the block comes in later and we've got those transactions in a block, they're already validated and they're already written. And if anybody's really sharp about this stuff and thinking, well, geez, what if there's a soft fork and the rules change after you've validated it? Yeah, we deal with that. No time to explain. The block orphan pool, that was really costly back in the day. I don't know, William, if you remember that; it was a nightmare, right? That's gone. So, similar to the transaction DAG, we have that big O of 1 there. That means that anything in the DAG is indexed by hash in constant time.
So even though it's a graph, if you want to find out whether we already have this transaction or we already have this block, it's constant time lookup because it's kind of a complex data structure that has a hash table creating a tree. So we have a header pool, which is pernable, kind of necessary. And what that does is that maintains the reorganization aspects of Bitcoin. So headers are coming in. It's header first. So we're not doing this with blocks anymore. We're doing this with headers. Heads comes in. We're trying to figure out where the strong chain is. We're doing that in a data structure that has nothing but about 100 bytes per block that we haven't written to the disk yet. As soon as you write it to the disk, it's no longer in this data structure. So no transactions in memory, no blocks in memory, a small amount of headers at the tip in memory so we can do reorgs. Generally, that thing's empty. There's like one as it's transitioning through. So we get into downloading blocks. So that's great, right? We get 250,000 blocks in the mempool, and blockchain info is crashing. And I go over to Cancoin, which runs a block explorer on our node. And it's just lightning fast and blazing through that. And we had a long chat about how funny this was. It doesn't matter how many blocks are in the mempool because they're not in the mempool. They're in what we call a transaction pool, which is on disk, which, by the way, is in memory, right? I already talked about that. So you get the model right and everything works out. So we're uncashed. There is no UTXO cache. There's no UTXO store. We just store the UTXOs. They're just part of the transactions that are on the disk. They're easy to look up because you look them up by the transaction hash, right? And then the offset. So you have constant time look up for one. There's a fixed size, so you know, constant time look up for the output. So no big deal. Lightning fast. 
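The "hash lookup plus offset math" output lookup described above can be sketched like this. All the constants here are hypothetical (real Bitcoin outputs are variable-length on the wire; the talk describes fixed-size slots in libbitcoin's store, so the sketch assumes fixed slots), and the class and field names are made up for illustration.

```python
TX_HEADER = 8      # hypothetical fixed per-transaction header size
OUTPUT_SLOT = 40   # hypothetical fixed-size output slot, not the real layout

class OutputLocator:
    """O(1) output lookup with no UTXO set: a hash table gives the
    transaction's file offset, then fixed-size slots locate the
    output by pure arithmetic."""
    def __init__(self):
        self.tx_index = {}   # tx hash -> file offset of the record

    def add(self, tx_hash: bytes, offset: int):
        self.tx_index[tx_hash] = offset

    def locate(self, tx_hash: bytes, output_index: int) -> int:
        base = self.tx_index[tx_hash]   # constant-time hash lookup
        return base + TX_HEADER + output_index * OUTPUT_SLOT
```

Since spentness is just transaction metadata in this model, UTXO count never affects lookup cost or state size, which is the claim the talk makes.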
So as long as you're not pruning the store, UTXO count is irrelevant. And we don't cache signatures or scripts, because, again, we've validated the entire transaction and written it to the disk. The whole thing's cached, right? Cached. Unconfirmed transactions, again: the unconfirmed transactions are immediately validated and stored. If we transition on a consensus fork, which might happen every 100,000 blocks or so, we have to revalidate the small number of transactions that have transitioned across validation rules. OK. And so transaction count doesn't matter. Everything's simple, great. So those are some of the big ones. This is a little bit more esoteric. Anybody that's ever worked in a node has to deal with this fact: you have to look back through block header history to do things like look at the versions on the block. Yeah, look at the versions and the bits field, so that you can do things like soft fork activation and proof of work retargeting calculations, et cetera, right? Those can be very costly lookups. In testnet, you can go back 2,000 blocks reading all of these every time you're validating a block, right? So we don't do that. We do this thing called state propagation. We maintain chain state in a data structure that's very small and has the necessary data going back. And every time a new block gets built on the previous one, we roll the chain state forward. We propagate it forward. We take the new data, push the old data off the bottom of the stack, and just keep pushing it forward. So we never need to hit the disk. And that basically allows us to do validation rapidly against the transaction pool, new blocks, and any block in this tree that we're building in memory. The header tree has chain state for each entry, so when any block comes in on any of those branches, we don't have to go back to the disk to find anything.
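The state-propagation idea above, rolling a small window of header fields forward instead of re-reading history, can be sketched as follows. This is a conceptual Python model with illustrative names and an illustrative window depth; libbitcoin's actual chain state carries whatever fields its rules need, per branch of the header tree.

```python
from collections import deque

class ChainState:
    """Rolling window of recent header fields (version, bits, time).

    Each new block pushes its data in and the oldest entry falls off
    the bottom, so retarget and activation checks never have to walk
    back through the store."""
    def __init__(self, depth: int = 2016):   # depth is illustrative
        self.versions = deque(maxlen=depth)
        self.bits = deque(maxlen=depth)
        self.timestamps = deque(maxlen=depth)

    def promote(self, version: int, bits: int, timestamp: int):
        """Roll the chain state forward onto the next block."""
        self.versions.append(version)
        self.bits.append(bits)
        self.timestamps.append(timestamp)

    def retarget_span(self) -> int:
        """Example query: elapsed time across the window, the kind of
        number proof-of-work retargeting needs; no disk reads."""
        return self.timestamps[-1] - self.timestamps[0]
```

Validating a block against the pool or any in-memory branch then only consults this structure, which is the "never hit the disk" goal the talk states.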
So the goal is to never hit the disk, even though it's not really a disk, it's just RAM. So this is really cool. This is something I'm working on right now, and it was actually proposed by one of the Electrum guys I was talking to: concurrent validation. So before, we were doing continuous concurrent, or parallel, block download. In other words, we get the headers, we get the strong header chain, and it comes up to the current time frame, within, say, 24 hours, whatever is configured. And now we start downloading those blocks. So say Bitcoin mainnet, 520,000 blocks: send all the hashes out to all your peers and start downloading them all in parallel. There's no blocks-in-flight tracking, none of that complicated nonsense. We just divide them up and ask the peers to give them to us. As they come in, we just write them to the disk. And once we've got the whole chain, we've got the whole chain. So we move up the point in the chain that's complete as the gaps get filled in. And if we're validating as we're doing this, depending on where you have checkpoints or milestones set, we just move the validator up. So initially, that validator was moving up linearly, right? You can't validate a block until you validate its previous block. But that's not actually true. You can do the most expensive parts of the validation, in most cases, in parallel, without looking up the outputs for the inputs. So that's pretty cool, right? So that's what this is. Not only do we have concurrent block download and storage, which obviously requires a parallel store where you can accept all the data at the same time, but we also have concurrent validation going on for the most expensive part of the validation. OK, so this stuff is going to get less interesting. So what does this mean? We're not at the point where I have really good objective metrics to compare, but I'll give you a feel for it.
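The "just divide them up" download scheme above can be sketched simply. This is an illustrative Python fragment, not libbitcoin's scheduler: once the header chain is known, the full hash list is partitioned across peers up front, with no per-block in-flight bookkeeping; a finished channel can later take over a lagging peer's slice.

```python
def partition_work(hashes, peers):
    """Divide the full list of block hashes among peers up front.

    Each peer downloads its slice independently; arriving blocks are
    written straight to the (parallel) store, and the completed
    prefix of the chain advances as gaps fill in."""
    slices = {peer: [] for peer in peers}
    for i, block_hash in enumerate(hashes):
        # round-robin assignment; any even split works
        slices[peers[i % len(peers)]].append(block_hash)
    return slices
```

The expensive input-script checks can then also run in parallel per block, since, as the talk notes, most of that work does not depend on having validated the previous block first.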
So again, I have this Windows machine and a Linux machine. I was able to fully sync mainnet a couple of weeks ago, consistently, several times, on my Windows machine in three hours. On the Linux machine, it took about two hours. There are differences in the memory mapping on Windows, and also in the serializer, where we haven't figured out what the performance issue is, but they slow us down a little bit. And those are fast machines. But on the Windows machine, just to compare, bitcoind took over 12 hours. And it's not validating everything, right? It's relying on the well-known blocks. Ours wasn't validating all the blocks either; that part is still being hooked up. But it's a pretty good apples-to-apples comparison. And remember, we're indexing all transactions in that case, and they're not. Once you add in their transaction index, it becomes much slower still. So currently, we're, apples to apples, maybe four to six times faster on initial block download. And this is without any optimizations; this is just initially getting it working. So, pretty cool. Running out of time. So the network has, for a long time, been based on Boost ASIO, the proactor pattern. It's asynchronous. You can run on one thread, but it's still asynchronous. It's also parallel, so you can configure it to run on as many physical threads as you've got, or you can configure it to run on one if you want. And the parallel block download takes advantage of that. So the more threads you've got, the more blocks you're downloading in parallel. But again, you can have thousands of peers if you want. Generally, we configure for eight outgoing. And so as we're doing the parallel block download, you're going to get actual concurrent downloading and storage if you're running on more than one thread. Work is going to be divided among the peers. And so you always have this problem: what if I have slow peers, and they really drag me down?
So we worked out a way to determine what's a slow peer: standard deviation. We track the deviation of all the peers, and if one falls below, by default I think it's negative 1.5 standard deviations from the mean, we drop it and pick up another one. You can configure that. If you didn't want to drop peers at all, you'd just make it two or three, and it won't drop peers. So a channel finishes up, it's got all its blocks downloaded, it needs more work: it just steals work from the channel that's got the most blocks left to download. And then that peer drops, because we don't want all those blocks coming in redundantly. And then it continues on. So also, since we're dropping peers pretty frequently, because they're slow, they're not responding, or we've stolen work from them, we do a batching of connections. And this is continuous; it's not something we do just during initial block download. So batching is configurable; by default, I think it's five. Every time we go to connect to a peer, we grab five addresses and reach out to all of them in parallel. The first one to come back gets connected, and the others get dropped. And that's because, on average, one out of every five addresses in the address pool is good. So it really doesn't affect the network at all, on average, but it makes your node run a lot faster. And finally, the server. So the client-server protocol is implemented in libbitcoin-protocol; that's kind of a ZeroMQ abstraction. And libbitcoin-server exposes a query interface over the node. So really, that's all the server is: a configured ZeroMQ interface over the node implementation. So you've got ZeroMQ, which is itself asynchronous and transport-independent. By default, the endpoints are TCP, but you can make them whatever you want. You can run in-proc or out-of-proc. It's extremely low overhead, and it does connection management for you: timeouts, keep-alive, things like that.
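The standard-deviation peer test described above is straightforward to sketch. This is an illustrative Python version (the function name and rate units are made up; the -1.5 default is the figure the talk quotes): score each peer's download rate against the population, and flag the ones far below the mean for replacement.

```python
import statistics

def slow_peers(rates: dict, threshold: float = -1.5) -> list:
    """Return peers whose download rate falls more than |threshold|
    standard deviations below the mean; candidates to drop and
    replace. Moving the threshold out to -2 or -3 effectively
    disables dropping, as the talk notes."""
    if len(rates) < 2:
        return []
    mean = statistics.mean(rates.values())
    stdev = statistics.pstdev(rates.values())
    if stdev == 0:
        return []   # all peers performing identically
    return [peer for peer, rate in rates.items()
            if (rate - mean) / stdev < threshold]
```

A relative measure like this adapts automatically to the node's overall bandwidth, which an absolute rate cutoff would not.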
It's extremely efficient, the fastest stuff out there. So this is kind of analogous to the JSON-RPC interface, but this is internet scale, extremely fast. So we have constant time lookup going directly to a ZeroMQ interface. CurveCP: there's an implementation called CurveZMQ, and that's what we use. CurveCP is kind of the equivalent of TLS. Who likes TLS? So there's no HTTP, there are no web servers, none of that stuff. This is just an ECDH implementation over Sodium. Modern crypto, elliptic curve, 32-byte keys in the config file, that's it. You self-generate the keys using our Bitcoin Explorer app, and that's it. You get everything you'd expect: server identity, privacy if you enable it, and client auth if you want. You can just do that with your config file. Workers, internally to the implementation, are decoupled from the endpoints. So there's a worker for every endpoint. And on the query endpoint, which does the most complex work, because it's not just pushing, it's actually responding to requests, you can configure as many worker threads to spin up as you want. I've tried it; I can't generate enough load to make more than one thread worthwhile. So, millions of requests, and you hardly notice it. So I'm going to kind of skip over this, because I'm almost out of time. Oops, queries, right, blocks. So there are query endpoints, and then there's a block notifier, a transaction notifier, and a heartbeat. This is the really important thing. When you're running a high-scale server, one of the problems you run into is that your clients don't always want to accept your data, or they can't take it fast enough, one or the other. So you end up getting throttled by your clients. Not good. So we use one of ZeroMQ's capabilities, called the high water mark. If you're pushing data out to your clients and they're not responding, you just drop the data at some point. You have to; otherwise, you're going to fall over. And that's not what you want.
You want the clients to suffer the cost in this case. So it's absolutely necessary, and you don't see a lot of client implementations or applications that do this properly. You can configure the limits. So you can start dropping messages once you've queued up 100,000 or a million or whatever; you can configure that. But what we do is provide the client a sequence number, so everything that's droppable is sequenced. There's a counter in there per connection if it's query-based, or a global one if it's push-broadcast-based. So a client can see if it's lost something, if it hasn't gotten it, and go back and get it again once it catches up. So it's perfectly reliable, but fast, and it doesn't drag down your server. OK. So I used up all my question time. Perfect. All right. Woo! Yes! Woo! OK. Everybody's still here? Firehose, there you go. Whoa. All right. ZeroMQ. I mean, God, Lord. OK. Are there questions worthy of the presentation? Yes. All right. Let's do it. What did you say? Thank you very much. Very interesting. A bit of a practical question, maybe, but is it compatible with Bitcoin Cash, or are there plans to do that? That's good. I figured that would come up. I meant to actually mention that up front, but I didn't have it on my slide, so it got lost. So if you go on the libbitcoin-server repo, there's a wiki, and in there, there's an FAQ. And the FAQ has our policy on forks, splits, all these things we could work on. And the policy has been, we work on BTC stuff that's active. Like, we don't do soft forks before they're actually active and being used for a while, say three months after activation. And it's expressed as a resource limitation on our part, right? So it's not a political statement. But we've worked really hard. I don't know, William, if you were around, when we did testnet, we used to recompile, right? I got rid of that around the time you were still using it, right?
And it really sucked to recompile just to do a little fork. So we made that configurable. All the soft forks and hard forks that are on BTC are in the config file; you can turn them on or off. So if you want testnet, you turn the difficulty rule off. If you want regtest, you turn retargeting off. If you don't want to accept segwit, you just turn segwit off, right? And then you don't broadcast the segwit service bits. So some people using libbitcoin recently said, hey, we made a Litecoin fork. So Cancoin, and I think maybe Bitprim, did a Litecoin fork. And then I think Bitprim did a Bitcoin Cash fork. And I think Feathercoin is going to build their whole thing on libbitcoin, and they've got some pretty simple forks against BTC. And there's a couple of others, too. So what I've done is told people, if they implement the forks and just give us the code, in a branch that works where everything's good, then we'll incorporate those forks as configurable forks in the main code base. So it's not a recompile now, it's just a configuration setting change. So you can do Litecoin testnet or Bitcoin Cash testnet, as long as you use the same testnet rules. So there's only one thing I have to do, or we have to do, to make that possible and easy, and it's probably going to ship; I've said it will ship before. That's all the numeric parameters, not the forks, but the genesis blocks, the magic numbers, the time between blocks, things like that. There's a list of numbers that go into a header file that we need to move into config. So once we've done that, we can accept the code forks, if they're reasonable and small and not going to corrupt the whole implementation with changes. Then, yeah, it's pretty straightforward. So that's the plan: we will be able to, with a single binary on the same machine, fire up, using a command line argument to a config file, an entirely different coin in, say, a different directory.
So if you're doing, like, a mobile wallet, which some people are working on: one binary. Or if you're doing testing, you don't have to recompile into regtest for your Bitcoin Cash or whatever. So that's pretty cool. So my idea is, I want to provide a platform stack that everybody can use. And the main requirement is that it just doesn't drag us down into not being able to get things done because we're supporting too many things. Sorry, a long answer; I know we're out of time. Wow, header files for whatever consensus you want. OK, cookie-cutter blockchains. Any other questions? Help me here. Somebody, give me something. Come on, Paulie. Where's Paulie? In the back. Oh yeah, Paulie, we're coming for you. So, as someone in the mobile wallet space, accessing TCP sockets really sucks on a lot of different platforms on mobile. Any talk between you or the Electrum guys to put in WebSockets? OK, good. We didn't talk about this ahead of time; seems like I have my straight man in the audience. So one of the things that one of the companies that uses our software did is they put a lightweight WebSocket implementation over the server. So there's a ZeroMQ client for just about every language you can possibly imagine, even one for JavaScript. But in the browser, you'd have to add a plug-in, so that's the one scenario where you need something else. Plus, we wanted to be able to do an admin interface, like you can do with a router with an integrated web presentation. So there will be, in v4, a single-file WebSocket implementation that's embeddable. We just put it in the source for the server, and all the endpoints are exposed through WebSockets. Hey. It was really cool when I saw that. And Cancoin, by the way: if you want a really high-performance web front end on top of libbitcoin, go to Cancoin. They have a thing called Voyager, which is a block explorer. There are several block explorers built on libbitcoin, and I don't want to diss any of the other ones.
But this is amazing. They did it right. They built a decoupled front end, and it queries directly into the ZeroMQ interface, which goes directly into the RAM of the store. And it's the fastest block explorer you'll ever see. It's amazing. They did a really good job with it. So if you want something like that, like for industrial use, that's what I'd recommend. And it's open source; you can grab it. Awesome question. Any other questions? Eric, I think you beat us up. All right. Thank you. Eric, a round of applause. Woo. Thank you very much.