This is Jeremi Gosney, coming to you live from Password Village HQ2 in sunny Austin, Texas. Today's high is 104 degrees. Today I'm going to be talking to you about what it's like cracking at Terahash scale. I'm not going to make this a vendor pitch in any way; I'm going to focus on the technical challenges we encounter as we support clients who have hundreds of GPUs in their clusters. When I first started on this, I thought 100 GPUs was a really big deal. Now we tend to view 100 GPUs as, meh, it's good, but it's kind of middle of the road.

There are a lot of distributed solutions for hashcat out there, but as far as I know, Terahash is the only one that aims to operate at warehouse scale. What I mean by that is, with every version that passes, and especially with the most recent version, we aim to embrace the concept of the data center as a computer. While we currently don't have any clients who have an entire warehouse full of password cracking hardware, that's where we're trying to go, and we try to enable the software to operate at that level even if our clients aren't quite there yet. So I'm going to go through the history of how I got started in massive distributed cracking, and the technical challenges we've faced as we continue to evolve our distributed hashcat solution. So I'm going to start the slide deck.

In the beginning, I got into this because I was messing around with distributed computing. I was doing stuff with MPI, Rocks clusters, things of that nature. But that was all CPU based, and when GPU cracking came about, we started looking for solutions that would let us distribute GPU workloads. I found a piece of software out of Hebrew University called VirtualCL, and I did a presentation on a 25-GPU proof-of-concept cluster that me and Bitweasil put together for Passwords^12 in Oslo. When I woke up the next morning after giving that presentation, we had gone viral. We were on the front page of Slashdot, Gizmodo, Boing Boing, NBC News, The Register; we were everywhere. In hindsight, I really should have used a lot more than 25 GPUs. I mean, we had more GPUs; that's just what we chose to dedicate to this proof of concept. Had I known in advance how the general public would react, I would have used three times the number of GPUs. So those are some of the headlines I woke up to the next morning, which was pretty surreal, because I didn't really see this as a big deal.

VirtualCL was cool at the time because it was the first solution I knew of that let us transparently distribute OpenCL jobs across an arbitrary cluster. What VirtualCL does is provide a virtual OpenCL platform, and the target software in theory doesn't need to be aware it's communicating with a virtual platform. It sees VirtualCL as just another OpenCL platform, like AMD, Nvidia, Intel, what have you. The target software simply links against the libOpenCL provided by VCL, which communicates with a broker daemon. The broker daemon distributes work to the agent daemons, and the agent daemons, which are installed on the compute nodes, actually communicate with the real OpenCL library, be it AMD, Nvidia, whatever.
So in theory, all of this is transparent to the end application, but in practice it didn't quite work that way. We had to have a special fork of oclHashcat called vclHashcat, which had workarounds for VCL quirks and such. It also required InfiniBand, because VCL was really latency sensitive and ended up being a very chatty protocol that needed essentially real-time communication. Bandwidth wasn't so much of an issue; it pretty consistently pulled less than one gig of bandwidth. But it was really sensitive to latency. A couple of milliseconds was acceptable, but if you got up into double digits of latency, you saw a noticeable impact on hash rate. And of course InfiniBand hardware is expensive, way more expensive than Ethernet. It was also closed source, and it required frequent updates. This was back in 2012 through 2013, when every GPU driver release broke some shit, and we had to go back in and implement workarounds and fixes for whatever AMD or Nvidia broke in their driver that day. VCL was no exception. With vclHashcat, we had to have our own workarounds for VirtualCL, and then the VirtualCL team had to turn around and implement workarounds for every new driver release, which is a pretty substantial job.

The problem is, VirtualCL was created by grad students. This was a grad school project, created in their final year, and when they graduated in spring 2013, that was it. The project died. So then the next version of fglrx was released, I think it was Catalyst 13.9 in September of 2013, and we had to install a VirtualCL cluster for a client. This was actually a 64-GPU cluster, and we were just starting out, so this was a pretty big deal for us; 64 GPUs was pretty fucking cool at the time. We get the hardware built, all that shit, and then we go to install VirtualCL on fglrx 13.9, and VirtualCL just shits the bed. So I panic and email Hebrew University, like, you guys really need to update VirtualCL for Catalyst 13.9. And they're like, well, we can't, everyone who was working on this project is gone, and the intellectual property remains with Hebrew University and MOSIX, so they can't just have those people keep working on it on the outside. I offered to take over the project for them, because it was essential to what we were doing and I didn't want to see the project die; I thought it was really neat. But the professor over there who was in charge of the project was not amenable to that idea. He actually had pretty grave concerns when we first started using VirtualCL for password cracking, because he envisioned us turning the world into a giant botnet. That wasn't really on our agenda, at least not at that time. But yeah, he really didn't like the idea of his VirtualCL shit ending up in our hands. So we were left high and dry without a software solution, and we desperately needed something. That's when we conceived the idea of Hashstack. Initially we thought about making our own VirtualCL clone, since Hebrew University was not going to give us the VirtualCL source code, and we were like, well, fuck it, we'll start our own VirtualCL, with blackjack and hookers.
But the more we sat down and thought about what that entailed, we determined the level of effort was just way too high, and the timeframe in which we needed it was way too short. We literally had hardware in house that we were building for clients who now had no software solution because of VirtualCL. So we decided to make a distributed wrapper instead, and that's what became Hashstack. Like I said, we had to iterate really quickly on this, so we went from a whiteboard session to production in less than two months. That was basically me and Tom Steele locking ourselves in the office, ordering over $100 worth of Taco Bell, and banging this out as fast and furious as we could.

It had a traditional client-server-agent architecture. It was hashcat focused, but it actually had a generic plugin interface. This was before there was just one hashcat, right? So we had plugins for hashcat, oclHashcat-plus, and oclHashcat-lite, plus a John the Ripper plugin, and then a generic interface where any cracking tool could be made to work with Hashstack as long as it adhered to a standard format we had defined. It also had the ability to run arbitrary commands when idle. That was less for our clients and more for me, because we were running Hashstack in house as well for our cracking-as-a-service type stuff, and this was back when GPU mining was still profitable. When we weren't working on a password cracking job, I wanted this thing to be generating coins in the background. So the arbitrary idle command was literally just put in place so people could mine Litecoin or Namecoin or whatever when their cluster was idle. Totally different time. But it actually worked; it did exactly what it was supposed to do, and it was good.

And the whole goddamn thing was implemented in this fancy new language called Node.js. Again, this is early 2013, so Node.js was the new hotness, and everyone was starting to move in that direction with microservices and everything. Something that's not on the slide: we implemented Hashstack version one entirely as microservices. I shit you not, there were like 23 individual packages that comprised Hashstack, and almost all of those were microservices. We went that direction because that was the programming zeitgeist at the time, right? Everyone was pushing microservices and asynchronous and NoSQL and all that other horseshit. And it created way more headaches than if we had just written one monolithic fucking program. Maintaining one server binary and one agent binary is way simpler than maintaining 23 individual microservices and making sure they're all communicating properly and that the versions are correct across all 23 packages. There was a lot of unnecessary headache introduced with that, which we attempted to manage. But even then our clients would still mess it up. They would upgrade like 10 of 20 packages and the other ones would be stuck at old versions, or one daemon would die and another daemon wouldn't know it was dead, so things would break anyway.
So yeah: Node.js, microservices, NoSQL, MongoDB, all that shit. I tried to use the latest and greatest everything, because at the time that seemed like the best idea.

Hashstack was also more than just workload distribution. A lot of our clients at the time thought Hashstack was just a GUI front end for hashcat. It's not just a web UI for hashcat; it actually does multi-user and workload distribution with a really complex four-dimensional queueing mechanism. You'd try to explain all this to a client and their eyes would just glaze over: "So it's a web UI for hashcat?" Fucking... yes, okay, it's a web UI for hashcat. But it actually did a hell of a lot more than that. In fact, when you installed Hashstack on your box, it basically hijacked it and configured it exactly the way we needed it configured. It would provision the entire server just from installing the initial meta package. It's a complete stack of packages that you get: a stack, Hashstack, that's where the name comes from.

So we had driver installation, driver configuration, driver updates. Even nowadays that's kind of a challenge, but this was back at a time when GPU drivers were a pain in the ass to install, and once you got them installed, they were a pain in the ass to update. So the fact that we handled this automatically was a pretty big value add. It also handled Xorg configuration, even though these are headless servers. This was during a time when, if you had a headless server, you had to have fucking dummy plugs connected to each of the GPUs. There are probably very few people watching this who actually remember that, but yeah, if you had a completely headless server, each one of those GPUs had to have a special dummy plug attached that connected the right pins to trick the GPU into thinking there was a monitor attached. And we actually had to run Xorg, because even though these are headless, without Xorg we didn't have the ability to manually set fan speeds or GPU clock rates or power profiles or any of that. So even though it's a headless server, we still had to have dummy plugs, still had to run Xorg, and still had to configure Xorg properly so we could control the fans and the clocks and all the other shit we needed to do.

Along those lines, Hashstack also did GPU tuning and automatic overclocking for you, which was kind of a double-edged sword. When the R9 290X came out, at first it was entirely unusable because of how aggressively AMD implemented PowerTune in the firmware; I think it was PowerTune 2.0 at the time. The GPU would throttle so aggressively that it was way slower than the 7970, essentially unusable. So I wrote od6config. And I think it was PowerTune fucking everybody, or Overdrive... sorry, no, PowerTune is AMD, PowerMizer is Nvidia. All right, whatever. od6config enabled us to properly configure the R9 290X, and it was good, but it was still kind of mediocre, right? But at the time, when I measured it, it was only drawing 300 watts, which is important because our server platforms like the Brutalis can only handle GPUs that draw up to 300 watts. You draw more than that, and you're going to blow a fuse on the PCIe slots on the motherboard.
So we had an automatic overclock profile for the R9 290X that we pushed out to our clients, which made the GPU draw just about 300 watts. Then a little later down the road, about a year, year and a half later, AMD pushed out a driver update that caused the power consumption to go up by a good 50 to 75 watts. And we did not know this. It went undetected by us until we started getting a rash of clients seeing dead PCIe slots on their motherboards, and all of them had R9 290X based solutions. I finally busted out a Kill A Watt and figured out what the fuck was going on. But through all the mechanisms we had in Hashstack for automatic overclocking and configuration, we had the ability to push out a new underclock profile to all of our clients to drop the power consumption back down and keep it from blowing PCIe slots on the motherboards. So it was both good and bad. The fact that we could automatically overclock GPUs to get the most speed out of them is awesome, but in that one specific scenario it was bad, because you start blowing PCIe slots. Then again, it was good that we had this functionality, because we could correct it through software, over the air, on all of our client systems at the same time, which saved us a lot of money, because otherwise we were looking at hundreds of thousands of dollars in new motherboards. And then of course Hashstack also had cluster resource monitoring.

So there were some drawbacks. Like I said, the entire goddamn thing was written in Node.js, and while Node.js is asynchronous, it also runs on a single CPU thread. That may have changed at some point, but we're talking 2013, seven years ago, and I haven't touched Node.js since. At the time, everything ran in a single CPU thread, which was stupid, because our cluster controllers had a minimum of 12 CPU cores, 24 threads, and we're sitting there stuck on one thread in the server daemon. So any long-running method would just totally block the entire application, and we had massive scalability issues. I think the very first alpha we released to clients could only support like four nodes. The second one, we got up to nine or ten nodes. Basically, any time a client ordered a cluster larger than the previous largest cluster we had ever built, we ran into some kind of scalability issue, identified the bottlenecks, and had to work to remove them. Unfortunately, 90% of the time the bottleneck was the fact that JavaScript was too slow for the task we were trying to do, so we had to reimplement those bottleneck methods in a faster language and just shell out to them. Keyspace calculation, for example, was reimplemented in C. Calculating the keyspace for a mask attack is really lightweight, but when you're talking about massive dictionaries or rule files, trying to count the number of lines in a file with JavaScript is painfully slow. That's largely why that entire portion was reimplemented in C, and we just shelled out to that binary to eliminate that bottleneck. Well, not eliminate it, but dramatically reduce it. And anything regex, we reimplemented in Perl, because Perl is amazing at regex and JavaScript is not.
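To give a sense of what that keyspace helper boils down to: for dictionaries and rule files it is essentially just streaming the file in large chunks and counting newlines, without ever materializing individual lines. The real helper was written in C; the sketch below shows the same idea in Go purely to illustrate the approach, and the names are made up.

```go
// linecount: stream a large wordlist or rule file and count newlines,
// which is the "keyspace" for a straight dictionary attack. This is a
// rough sketch of the shell-out helper described above, not the actual
// tool (which was written in C).
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

func countLines(path string) (int64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	buf := make([]byte, 4*1024*1024) // 4 MiB chunks; never split into lines
	var total int64
	for {
		n, err := f.Read(buf)
		total += int64(bytes.Count(buf[:n], []byte{'\n'}))
		if err == io.EOF {
			return total, nil
		}
		if err != nil {
			return 0, err
		}
	}
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: linecount <file>")
		os.Exit(1)
	}
	n, err := countLines(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(n)
}
```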
The primary thing we were doing with regex was validating the hash lists. A client uploads a hash list, and some of our clients wouldn't even balk at throwing 200 million hashes at Hashstack in a single job. Hashstack then has to sit there, receive this massive hash file, validate each individual hash, and stuff it all in the database. That was a massive bottleneck. So we ended up reimplementing all of that in Perl to try to get it down, and then the agent, the entire fucking agent, we just reimplemented the whole thing in Perl. We got so fed up with everything there, and the only reason we picked Perl at the time is that it was the only language we could iterate in fast enough to rewrite the entire damn thing.

We also used MongoDB, because NoSQL was all the rage at the time, and out of all the NoSQL databases we tested, Mongo was the most performant. But then we had clients do things like throw 200 million hashes at Hashstack, and that caused not just bottlenecks and blocked processes, but some really weird UI behavior as well. They would go through the process of creating a job, uploading the hash list, and then just sit there with the little spinner until it timed out. So they'd go create the job again, because it timed out, right? Then they'd come back ten minutes later and find out the first job had finally completed and was actually active and running. But because they went through the process four times trying to get it to start, now they have four of the same job running on the cluster. So we threw Redis in front of Mongo as an in-memory caching layer, and that actually had a significant positive impact and enabled us to ingest hashes a lot faster. But it still wasn't a great solution overall.

The web UI was implemented as a single-page application in Angular. It was really clunky, really heavy, really slow. It constantly caused the browser to freeze if you left it up for more than 30 minutes, and it just had a ton of bugs. Probably the most annoying thing: if you were a hashcat power user and sat down to use the Hashstack web UI, it was cumbersome as fuck. There was no way to move quickly in that UI, and it got very repetitive and very tedious. So there were a lot of drawbacks to that Angular web UI. I absolutely hated it, and anyone who was really experienced with cracking and hashcat also hated it and begged us not to do that again. We were also severely limited by what we could do in a browser for non-hash formats at the time. We were extracting everything server side: you'd upload a PDF or a DOCX or whatever, and we would scrape the file server side to get the necessary bits, like office2hashcat and pdf2hashcat, things like that. The problem then became, what do we do with things like TrueCrypt volumes, or really large 7-Zip volumes, or raw disk images? Are you really going to upload a 100-gigabyte disk image through the browser just so we can scrape off like two kilobytes? No, that doesn't make any sense.
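To make the Redis-in-front-of-Mongo part a little more concrete: the idea was basically a write-behind buffer. Validate each hash as it streams in, accept it into a fast in-memory layer right away, and let a background worker do the slow bulk writes into the database, so the upload request isn't stuck spinning. Here's a rough sketch of that shape in Go; the buffered channel stands in for Redis, the stub stands in for the Mongo bulk insert, and the hash regex is just an example, so don't read any of it as the actual implementation.

```go
// Write-behind ingest sketch: validate hashes, queue them in memory,
// flush to the backing store in batches from a background goroutine.
package main

import (
	"fmt"
	"regexp"
	"sync"
)

// rawMD5Re matches 32 hex characters, e.g. raw MD5 or NTLM style hashes.
// Real hash list validation covers many more formats than this.
var rawMD5Re = regexp.MustCompile(`^[0-9a-fA-F]{32}$`)

type ingester struct {
	buf chan string // stands in for the Redis in-memory layer
	wg  sync.WaitGroup
}

func newIngester() *ingester {
	ing := &ingester{buf: make(chan string, 100000)}
	ing.wg.Add(1)
	go func() {
		// Background flusher: stands in for the MongoDB bulk insert.
		defer ing.wg.Done()
		batch := make([]string, 0, 1000)
		for h := range ing.buf {
			batch = append(batch, h)
			if len(batch) == cap(batch) {
				flushToDatabase(batch)
				batch = batch[:0]
			}
		}
		flushToDatabase(batch)
	}()
	return ing
}

// Submit validates one hash and queues it; it returns immediately
// instead of blocking the upload handler on the database.
func (ing *ingester) Submit(h string) error {
	if !rawMD5Re.MatchString(h) {
		return fmt.Errorf("invalid hash: %q", h)
	}
	ing.buf <- h
	return nil
}

// Close drains the buffer and waits for the final flush.
func (ing *ingester) Close() {
	close(ing.buf)
	ing.wg.Wait()
}

func flushToDatabase(batch []string) {
	if len(batch) > 0 {
		fmt.Printf("flushed %d hashes\n", len(batch)) // placeholder for the real bulk write
	}
}

func main() {
	ing := newIngester()
	_ = ing.Submit("8846f7eaee8fb117ad06bdd830b7586c")
	ing.Close()
}
```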
So yeah, the browser was an issue. And then we implemented access controls, but there was nothing fine-grained. We assumed that because everyone operating the Hashstack cluster was on the same team, they all had equal rights and equal access to the cluster, and therefore all users were admins. We didn't have different roles or anything; if you had an account, you obviously had a right to be there, so therefore you were an admin. Some of our clients had no problem with that, because everyone on their team really did have equal rights, access, and privilege on the cluster, so it was totally fine. Other clients were like, this is absolutely unacceptable.

It eventually got to the point where the old version one code was completely unmanageable. Even implementing what should have been a minor bug fix ended up requiring major refactoring. Because we threw it together in less than two months, we just didn't think of everything in advance, and it got really, really hard to implement new features and work around some of the bugs in there. So we decided we needed an entire ground-up rewrite for Hashstack version two. At some point I got the idea to create a commercial fork of hashcat with native distributed capabilities, and at the time I was so fucking stoked on this idea. It seemed like the best idea in the world to me. So we set out to do just that. This was right before hashcat went open source, and we started work on it literally the minute hashcat went open source.

We implemented it in Golang and C, with a Postgres database. We chose Go because of its high concurrency with goroutines, right? With Node.js everything ran on a single CPU thread, and we wanted the opposite of that for version two; we had learned our lesson there. We wanted lots and lots of goroutines running everything so we'd use all 24-plus threads on the box. Go also allows for pretty rapid development, so we thought it was a pretty good choice. But we still stuck with the traditional client-server-agent architecture: numerous clients, one server, many agents, meaning that server is both a bottleneck and a single point of failure.

So the first thing we had to do was split hashcat in two. Everything that had to do with actually starting a cracking job and executing an OpenCL kernel on a GPU was placed into the agent, and nothing else; the agent was just a bare-bones, trimmed-down hashcat: launch attack on GPU, and that was it, nothing more. Everything else we placed into the server library. Then any code that wasn't already in hashcat, such as the JSON API, the protobufs for agent-server communication, workload distribution, and multi-user access control (and this time we actually implemented fine-grained, multi-role access control), all of that was implemented in Go, and we just linked against the C libraries. This effort actually laid the foundation for the hashcat 4.0 refactor with libhashcat.
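Just to illustrate the "lots and lots of goroutines" point: the pattern is to fan work out to every agent concurrently and let the Go runtime spread those goroutines across all the CPU threads, instead of serializing everything on one thread like the Node.js server did. The chunk fields and the dispatch call below are made up for the example; only the concurrency pattern is the point.

```go
// Fan keyspace chunks out to agents concurrently, one goroutine per
// agent, then collect the results. A toy sketch of the concurrency
// model, not the actual Hashstack server code.
package main

import (
	"fmt"
	"sync"
)

type chunk struct {
	agent string // hypothetical agent identifier
	skip  uint64 // offset into the keyspace, like hashcat's --skip
	limit uint64 // length of the slice, like hashcat's --limit
}

// dispatch is a placeholder for the real RPC call to an agent.
func dispatch(c chunk) string {
	return fmt.Sprintf("%s cracking [%d, %d)", c.agent, c.skip, c.skip+c.limit)
}

func main() {
	agents := []string{"node01", "node02", "node03", "node04"}
	const per = 250_000 // keyspace slice per agent, arbitrary for the demo

	var wg sync.WaitGroup
	results := make(chan string, len(agents))
	for i, a := range agents {
		wg.Add(1)
		go func(c chunk) {
			defer wg.Done()
			results <- dispatch(c)
		}(chunk{agent: a, skip: uint64(i) * per, limit: per})
	}
	wg.Wait()
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```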
And then, because of how clunky our old web UI was, we ditched it entirely for a cross-platform CLI, which made our power users really happy. Our less experienced users were not thrilled with that decision; they still wanted a GUI, which, fair enough. Our plan initially was to maintain both the hashcat and Hashstack trees in parallel, and if there were any features we added to Hashstack, or any new kernels we wrote, we would just port those across as we chose. This didn't really work out very well in practice, though. When we released version two, it was in sync with hashcat 3.5, but it didn't stay that way for very long. And of course we had to add a cross-platform GUI to the roadmap as well, because we had significant feedback from our clients that they really wanted a goddamn GUI and were not happy that we took away the web UI.

It turns out this was the single dumbest motherfucking thing I've ever done in my life. I completely underestimated how much work was involved in creating a commercial fork of hashcat. Genuinely, I really don't know how to convey how fucking stupid of an idea this was. We spent 100% of our time just backporting things from hashcat into Hashstack instead of actually adding new features or writing new kernels. Literally months of work, just backporting hashcat commits. And it got really frustrating. Hey, are we working on these tickets? Are we working on these features? No, we're working on backports. I got so sick of hearing "working on backports, working on backports." Like, when are we ever going to be done with backports? And then when atom refactored hashcat for 4.0, that meant we had to refactor Hashstack as well, and we hadn't even finished implementing all the backports, at the speed atom was committing things to hashcat. So this was just absolutely unmaintainable, right? Like, fuck me.

And then there was also this other little minor problem where we picked the wrong HTTP library for the API, and this basically limited the server's capacity to about 12 nodes. Now, 12 nodes might seem like a lot; that's 96 GPUs, a pretty respectable size cluster. The problem is, this particular client had ordered 320 GPUs, 40 nodes, and you can't really deliver a 40-node cluster to a client with a server that can only handle 12 nodes. So this required, not a complete rewrite, but a damn near complete rewrite, because we used that HTTP library extensively throughout the code and there was no drop-in replacement. Basically anywhere we were dealing with HTTP, we had to rip out the old library and add in all the stuff for the new library. That did resolve the issue; it was just really painful to have to mess with at the time. But the server still continued to be a single point of failure and a source of bottlenecks as we tried to build larger and larger clusters. And we also had clients coming to us asking if they could order multiple cluster controllers and cluster them together, either for high availability or to load balance them or something. We didn't have a solution for that, but I started thinking about one.
And again, like 40% of our clients were just completely pissed that we had no GUI. So it got to the point where it was like, let's just rethink the entire fucking thing and get as far away from these mistakes as we possibly can. We never, ever want to make any of these mistakes again; we've learned a lot of lessons from what we've done over the last seven years. But we needed more than just a complete ground-up rewrite. We needed an entire paradigm shift. We needed to rethink this problem entirely and come up with something completely different from anything that's been done before. We needed to eliminate the single point of failure at the server. We needed to enable actual infinite horizontal scaling, because again, we always strive for the warehouse as a computer; warehouse-scale computing is the target we're trying to hit. There should never be any limiting factor, outside of budget, on how many nodes Hashstack can support. And then of course the other big requirement is that we have to actually move at the pace of hashcat development. As atom commits things to hashcat, we need those available in Hashstack, not instantly, but within a very reasonable time frame.

So the very first idea we came up with was: let's not do a traditional client-server-agent model, and let's not even try to cluster the cluster controllers. Let's just eliminate the cluster controllers entirely and make everything peer to peer, where all nodes are equal and there are no dedicated servers. The biggest requirement for this is that work can be submitted to any node. Let's say you have 20 users and 10 boxes; there shouldn't be just one node that all of them have to hit. They should be able to hit any node in the cluster and have the same view from any node. It also needs to be more user friendly than ever, and it also needs to run on Windows. I'll tell you why in a minute.

So we set out to figure out how we were actually going to implement this, because we had the high-level requirements, right? We knew it needed to be peer to peer, we knew all the nodes needed to be equal, all that shit. How are we actually going to do this? Pretty obviously we're going to need some sort of distributed state machine, and that part is straightforward and self-explanatory. But we can't just throw a message queue and some pub/sub at it and Bob's your uncle. What about disagreements? Or maybe even more than disagreements: what about an actual rogue cluster member, or a broken cluster node that's maybe not intentionally malicious, but certainly has the appearance of malicious behavior? So I started coming up with solutions for this, and I somehow got it into my head: dude, what if computers could vote? You just make it so that if there's a dispute, one of the nodes requests a vote, right? They take a vote and majority wins. And if there's a tie, like if there's an even number of nodes in the cluster, then literally fucking flip a coin, randomly, or play rock, paper, scissors. Why not? And if a peer doesn't accept the results, then the rest of the cluster just shuns that peer.
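For what it's worth, here's that intuition as a toy: every peer proposes what it thinks the answer is, majority wins, and a dead-even tie gets a literal coin flip. This is just the thought experiment sketched in Go with made-up values; it is not a real consensus protocol and it is not what shipped.

```go
// Toy majority vote with a coin-flip tie-breaker. This sketches the
// "what if computers could vote" idea only; it is not a real consensus
// protocol.
package main

import (
	"fmt"
	"math/rand"
)

// resolve tallies each peer's proposed value and returns the winner.
// On a dead-even tie between the top two values, it flips a coin.
func resolve(votes map[string]string) string {
	tally := map[string]int{}
	for _, v := range votes {
		tally[v]++
	}
	var first, second string
	for v := range tally {
		switch {
		case first == "" || tally[v] > tally[first]:
			second, first = first, v
		case second == "" || tally[v] > tally[second]:
			second = v
		}
	}
	if second != "" && tally[first] == tally[second] && rand.Intn(2) == 0 {
		return second // coin flip on a tie
	}
	return first
}

func main() {
	votes := map[string]string{
		"node01": "job-42 owns GPUs 0-3",
		"node02": "job-42 owns GPUs 0-3",
		"node03": "job-7 owns GPUs 0-3", // the broken or "rogue" peer
	}
	fmt.Println("cluster agrees:", resolve(votes))
	// A peer that refuses to accept the result gets shunned by the rest.
}
```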
And I started thinking, is this actually a thing? Can we actually do this? Am I completely fucking stoned? It turns out I didn't have to put much more thought into it, because this already exists: what I had thought up is basically a thing called the Raft consensus protocol. Now, it's cool that it already exists, because that means we don't have to do a whole lot to implement it. But it's also cool because it validated my ideas and theories on how to make this work, and told me I wasn't just completely fucking insane. We found several implementations of the Raft consensus protocol, and the one we ended up liking the most and settled on is Akka, which runs on the JVM and is written in Scala. Now, I hate Java. I have shunned Java for damn near my entire life, and I completely condemn the Oracle JVM. But there is OpenJDK, so we don't have to use Oracle, and we don't have to write in Java; we can use Scala and Kotlin. And since I've started working on this, I actually really like Kotlin. I don't have too many negative things to say about it; it's actually kind of fun to work with. Akka also has some plugins and modules that help us even further, such as Akka Persistence. What Akka Persistence does is take our distributed state machine, which is normally resident in memory, and persist it to disk. And Akka Distributed Data, of course, is what implements the distributed state machine.

On top of all of that, we built a custom multicast-based protocol to enable cluster nodes to automatically discover each other on the network and then automatically join the cluster; I'll circle back with a little sketch of that discovery idea in a minute. We made this opt-in, because obviously it's kind of a security risk if anybody can just join the cluster at will. But for scenarios where all of your nodes are on a dedicated subnet that's properly firewalled off, and you have proper layer 3 and layer 4 access controls, it's a really handy option. Again, thinking about warehouse-scale computing, or the data center as a computer: let's say you have racks and racks and racks of compute nodes. You would unbox one, literally throw it in the rack and power it on, and it would discover the cluster as soon as it powered on, join the cluster, and start doing work. That's the vision we have for this, and since it's all peer to peer, it can actually be literally that simple. But of course, the default is that you manually specify which peers you want to be part of the cluster.

Okay, and then in April of this year, we had a unique opportunity present itself to purchase L0phtCrack. DilDog approached me, we entered negotiations, and we purchased it earlier this year, primarily for the GUI, which is written in C++ using Qt as the windowing framework. And while it's not currently cross-platform, that's a really easy language and library to make cross-platform. So instead of Hashstack version 3, we now have L0phtCrack version 8. And while L0phtCrack 7 uses John the Ripper as a backend, we've ripped all that out, and we are in the process of integrating what was Hashstack version 3 as the new backend for L0phtCrack 8. And the final thing we're doing with this: we are dropping the Terahash hardware requirement.
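Circling back to that multicast discovery idea for a second, here's roughly the shape of it: each node periodically announces itself to a multicast group, listens for announcements from peers it hasn't seen yet, and a newly heard peer is where the join handshake would kick off. This is a bare-bones sketch in Go; the group address, port, and message format are made up, and the real protocol obviously has to authenticate peers before letting anything join.

```go
// Bare-bones multicast discovery sketch: announce yourself on a group
// address and note any peers you hear from. The address, port, and
// message format are invented for this example.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

const group = "239.255.70.70:7070" // hypothetical discovery group:port

// announce sends a periodic beacon to the multicast group.
func announce(name string) {
	conn, err := net.Dial("udp4", group)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer conn.Close()
	for {
		conn.Write([]byte("HELLO " + name))
		time.Sleep(5 * time.Second)
	}
}

// listen prints each previously unseen peer; this is where a real
// implementation would start the (authenticated) cluster join.
func listen() {
	addr, err := net.ResolveUDPAddr("udp4", group)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conn, err := net.ListenMulticastUDP("udp4", nil, addr)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer conn.Close()

	buf := make([]byte, 1500)
	seen := map[string]bool{}
	for {
		n, src, err := conn.ReadFromUDP(buf)
		if err != nil {
			continue
		}
		peer := src.IP.String() + " " + string(buf[:n])
		if !seen[peer] {
			seen[peer] = true
			fmt.Println("discovered peer:", peer)
		}
	}
}

func main() {
	hostname, _ := os.Hostname()
	go announce(hostname)
	listen()
}
```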
Traditionally, we do not let people purchase Hashstack as a standalone product. The only way to get Hashstack is to buy a Terahash appliance with it pre-installed; that's the only way to get it. But L0phtCrack has always been sold standalone. So we're changing the model: in keeping with L0phtCrack being sold standalone, we're going to be selling L0phtCrack 8 as a standalone product without the Terahash hardware requirement.

So yeah, that's where we are now with development, that's how the product has evolved over time, and those are some of the challenges we've faced in pursuit of our goal of true warehouse-scale computing. Thank you, and I hope you've enjoyed this.