I'm watching myself on the stream, there we go. So hi everybody, I'm Hank Leininger, co-founder of KoreLogic and one of the organizers of events like the Password Village and the Crack Me If You Can contest. I'm going to give a talk on a defensive technique, and an open-source tool, that we ended up developing after sitting around thinking about all these interesting ways that we can make password cracking easier, and asking ourselves: okay, can we turn those into ways to make password cracking harder? The TL;DW is: we're going to learn from successful attacks and use that to inform how we build better defenses. Just a little about me: I came up wearing a sysadmin hat and got into security that way, and I still enjoy building stuff about as much as I enjoy breaking stuff. And yeah, I sometimes do talks and build things. And PathWell, the topic of this talk: like I said, it started as just a brainstormed idea, but my co-workers actually turned it into running code that we released. So kudos to them. Now, the audience probably includes people who can't spell Hashcat and people who wrote Hashcat. So I'm going to give a very oversimplified overview of a particular kind of cracking technique for the sake of the people for whom this is not second nature. Those of you who know that I'm oversimplifying, don't @ me, bro. So we have a bunch of classic defenses against password cracking. I've got to keep in mind that the slides update way slower than I do. All of these standard old ways that we try to make passwords stronger? Yeah, they basically don't work. There are a bunch of talks that go into a lot more detail than I'm going to. Password complexity rules helped a little when they were newly enforced. But it turns out humans make very similar decisions in the face of similar policies, and password crackers have figured out the ways humans respond to being given certain rules. And we can guess where everybody's going to end up. 
As soon as we know the password policy of a company, we can guess what a lot of those users' passwords are going to look like. And if we know what their passwords were a year ago, and we know their rotation requirements, we can probably guess what they are today. Just to illustrate with some examples, I want to throw some numbers up here, which are rough; again, don't @ me. Suppose you built yourself a budget password cracking rig, and congratulations, you can exhaust the entire eight-character key space in a day or under a day. Once password lengths get up to 9, 10, 11 characters, that becomes infeasible. However, nobody actually tries to brute-force the entire key space. We figure out the most relevant, bang-for-the-buck places to spend our compute time. So the math and the proportions still apply, but we're not going after the entire key space; we're going after the most relevant tiny slice of the password key space. One way to divide up that key space is what Hashcat calls mask attacks, where you say: in this character position, it's going to be an uppercase letter, so I want you to try every uppercase letter; and in this position, I want you to try every number; and so on. And usually you don't do this for the entire password. You say: I want to take a dictionary word and then tack this pattern onto the end of it. Or: I want to take a dictionary word and apply rules that morph it in these ways in these positions. So again, just for the sake of figuring out how to discuss this and do the math, we'll use this way of looking at it, and this notation. Suppose you wanted to try every possible password of a specific mask. The math I've done here is: if we have four different character classes (uppercase letters, lowercase letters, numbers, and special characters), each position of the password could be one of those four. 
So essentially, the password key space might be 95 to the eighth, but the topology key space is four to the eighth. And if we have an 11-character password, there are four to the 11th different topologies. If we pick the single topology of uppercase, number, lowercase, lowercase, lowercase, and so on, number, number, special, we multiply out the possibilities in each of those positions. To exhaust that entirely, which again, we would actually combine with word lists or other things so that we didn't have to blindly guess, we would guess the most frequent, most likely candidates first. In any case, if you did want to exhaust the whole thing, there would be 265 trillion possible combinations. And that hypothetical cheapo cracking box, which would find it totally infeasible to brute-force the entire key space, could do that one topology in about half an hour. So the question we have to ask when evaluating the effectiveness of mask-based attacks is: do users bias towards specific mask patterns, specific topologies? If we could guess which patterns were overused by users, then we would focus our attacks just on those. We wouldn't have to worry about 99.99% of the key space; we would only have to worry about the parts of the key space where the users are all clustered. And again, we would actually combine this with other smarts, so in actual fact our work would be even less. Now, at our day job we do a lot of password audits, both regular recurring ones and one-offs when we do a pen test. So we had a whole bunch of data to look at. Spoiler alert: yes, of course users cluster into common topologies. That's why mask attacks are worth doing. But we wanted to get some science and some numbers behind that. So we analyzed the results of having cracked high percentages of a number of different organizations. 
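To make that arithmetic concrete, here's a minimal sketch in Python. The mask letters u/l/d/s and the 33-character special set follow Hashcat-style conventions, and the specific 11-character topology is the hypothetical one from above:

```python
# Sizes of the four character classes, Hashcat-style:
# u = uppercase (26), l = lowercase (26), d = digit (10), s = special (33)
CLASS_SIZE = {"u": 26, "l": 26, "d": 10, "s": 33}

def topology_keyspace(mask):
    """Multiply out the number of candidate passwords covered by
    a single topology mask, e.g. 'udlllllldds'."""
    total = 1
    for c in mask:
        total *= CLASS_SIZE[c]
    return total

print(95 ** 8)                            # full 8-char keyspace: ~6.6e15
print(4 ** 8)                             # 8-char topology space: 65,536
print(4 ** 11)                            # 11-char topology space: ~4.2 million
print(topology_keyspace("udlllllldds"))   # one topology: ~2.65e14, i.e. ~265 trillion
```

The half-hour figure then follows directly: a rig that can exhaust 95^8 in about a day covers 2.65e14 candidates in roughly 1/25,000th of that time budget.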
The first case study I'm going to throw out numbers for: we cracked around, or over, 99% of current and historical hashes. We attacked them not with a mask-centric focus; we attacked them with all our regular techniques. But then we did the crunching and the math and asked: okay, how many different topologies were in here, which were the most frequent, and how frequent were they? Out of a quarter million different logins and hashes, there were only around 7,000 unique topologies in use. And over 10% of the population used each of the three most popular topologies. So if we knew nothing about this organization whatsoever, other than that uppercase, lower, lower, lower, lower, number, number was going to be highly successful, we would crack over 10% of their population without a second thought. If we did all of the top five, we would have gotten almost half the company. If we did the top 100, which is still a tiny slice of the overall key space, we would have gotten 85% of all hashes. And by the way, you can also discern, from the relationship between the number one most popular and the number two, and between numbers three and five, that this organization made their password policy stronger sometime in the history of the dump that we have. Good for them, yay, stronger password policy. No: totally worthless, because all their users did was make the word inside just one letter longer, adding another lowercase to that run. Trivial for us to figure that out and go after. Now, this is a graphical depiction of how frequent the most frequent topologies were. Pay no attention to the labels on the X axis; they don't mean anything other than being the unique identifier that we assigned to each particular topology. 
The point being: the few most popular topologies, the ones lined up on the left-hand side of the graph, were 10% or more of the population each. And then it tailed off very quickly and steeply. So there were some users who were indeed special snowflakes, but there were like a handful of them way off on the right side of the graph. They don't help the organization as a whole stay safe, because the people on the left-hand side who are all grouped together are easy prey. Another organization: kind of a similar story. Slightly larger hash pool, slightly lower crack percentage at the time we did this crunching, but still well over 90%, so we were confident that the conclusions we were going to draw from this data were valid. By the way, to preempt the question that I would ask if I were watching this presentation: all of the subsequent stuff is based on number of cracks compared to total population, not number of cracks compared to total number of cracks. So for instance, when I say that 19,000 hashes using that one topology is 4.3%, I mean it's 4.3% of the 449,000 total, not 4.3% of the 419,000 that we were successful at. We don't know what the 30,000 that we didn't crack were, but we know what percentage of the total the ones we did crack are. So in this organization, the numbers aren't as bad, but there's still a very significant relationship. And it's also once again the case that somewhere embodied in the password history we captured, they changed their policies to make them stronger. But the way people responded was just to add another character at the end, which was a special, because the policy went from requiring eight characters and three of four character classes to requiring nine characters and four of four. So users were like: oh, okay, I'm going to put an exclamation point at the end. Once again, that didn't stop us from cracking the new ones. 
This is the graph for that organization. It's not nearly as bad and ugly, but it is still bad and ugly. So we did this same kind of math across eight or so different large data compilations. We discarded anything that was too small, because we didn't want a single outlier to skew the overall numbers too much. And we also limited it to ones where we had cracked a substantial portion, over 90%; again, so that the conclusions we drew were statistically valid for the population, with whatever caveats you had to take into account. This graphic is all of the histograms for each of those organizations, not sorted by what's popular for them, but basically stacked on top of each other so that we can see clear patterns occur. This very first topology on the list was popular in all eight of the organizations included in this data set. Same thing with the number three one; it was pretty popular. The number five one was hugely popular across everybody. The only organization that was substantially different from all the others is the one graphed in cyan, or light blue, here, where they have a huge tall spike and nobody else has a spike in that area. This was a big enough anomaly that we dug in to figure out why. And it's because that particular organization had a default password that was so commonly used, and still used by a huge percentage of their users, that it threw off all the rest of the numbers. It was a specific outlier just for them. I don't even remember what it was, but imagine it was something like "1changeme": a digit one and then all lowercase. And even when people changed their password away from 1changeme, they changed it to 2changeme or 3changeme, so the topology stayed exactly the same as the original default password's. 
Once you exclude that one data point, we see there's huge commonality in which topologies are super common across industries. And these are companies in different sectors. They are, for the most part, US-centric. Some of them are global, but the majority of their user population was still English speakers. So that is one caveat: it's very possible that a population with alternate language sets that users gravitate towards would look somewhat different. But if you knew nothing at all about a company other than that it had a substantial presence in the United States, take the top five common topologies on this list, attack those, and you're going to get tons of users. And as a defender, this is terrible, because even with what we think of as good password strength policies, we know our users are going to land this way, and attackers are going to come after them and hunt them down and kill them. So, to recap what all this data crunching told us, confirming our suspicions about the problem and where we ought to try to improve the situation: users pick the lowest common denominator. There are specific things they commonly do when told that the password policy is getting stronger. And although there are absolutely smart ways to attack an organization specific to that organization (go after word lists for their industry, proper names of things in their hometown, sports teams, location names, and so on), these trends are going to be common no matter who and where the company is. And complexity rules just don't help; not nearly as much as we think they do, anyway. By the way, another way to think about this in the COVID-19 era: users are not social distancing their passwords. To graphically depict what having 12% of your user population landing in a single topology means, imagine the land area of the... all right, I should be back. Sorry about that. 
If we imagine the entire land area of the United States as being the possible password key space that users could pick from, 13% of all users choose to live together in a land area smaller than Manhattan. And what's worse: when you tell users to change their password, they don't go far. They just move a couple doors down, or maybe one block away. So any attacker knows they don't have to go far to hunt them down. Another thing is that this applies even to much stronger out-of-the-box password policies. We discussed this with some friends who work in places where, going back to 2010, they had rules like: you have to use a 15-character password and it has to have a minimum of two of every character class. And they asked, so how strong is that, really? And we said: well, it's really strong if you're not a smart attacker. But if you were a smart attacker and used, among other things, mask-based attacks targeting the most common topologies, you would have a surprisingly good success rate. If you were looking at a large organization with lots of users and password histories, and you picked the topology that had the most users in it, you would likely still succeed in cracking a password roughly once every 12 hours, for however many millions of years it would take to exhaust. And you don't need more than a few to get you started. All right. So that's the foundational concept that led to what I'm going to talk about next. It became clear we need new defenses, new ways to make things harder. The rules that we have now aren't terrible, but they're not enough. And one of the strongest things we need to do is figure out how to keep users from all congregating in the same place, no matter what rules we impose on them. 
Human nature says the majority of those users are going to cluster in a few new places, which can still be predicted or discovered by the attacker. So, at a high level, what are some things we could do? We could blacklist topologies that we know are a problem, where we know or predict users are going to land. Not just blacklist individual passwords, but blacklist the topologies, the shapes of those passwords: nobody is ever allowed again to have a password that's an uppercase letter followed by a bunch of lowercase, followed by a couple of numbers, followed by a special. Just can't happen. Another thing we can do is require a minimum change distance between your current password and the new one you want to go to. So whatever words you're using, if you're using a three-letter word and a four-letter word with the first letters capitalized, the second letters replaced with numbers, and a special character in between, you can't use that exact same pattern on your next password change. And also, don't allow your users to congregate on whatever topology they choose to congregate on. There's a term that we didn't know when we were thinking about this early on, but which is becoming one of the ways to describe this: dynamic password strength enforcement, meaning the password policies you enforce are adaptive based on what your user population is doing. There are two big costs to this, key space reduction and user rebellion, and I'll talk more about those in a bit. So first, what do I mean by blacklisting? It's not that complicated. We figure out what the most popular topologies are, usually for a given policy. If your policy is 10 characters or more, it's not interesting to figure out what the most popular eight- and nine-character topologies are. And similarly, if you require four of four, then we don't care about blacklisting topologies that are three of four or two of four. 
By the way, whatever list you come up with as the best topologies to blacklist is also the best list to start with when you're a password cracker and nobody's enforcing PathWell yet. We published what was our working set at the time, back in 2014. We, and a bunch of others, could probably contribute updated data by now. But as with just about everything we figure out and publish, we assume bad actors had already figured these things out before we started. That's one of the reasons that KoreLogic started the Crack Me If You Can contest back in 2010: we figured bad actors were already figuring out and sharing password cracking tricks, and we wanted to bring more of that discussion out into the open for defenders and researchers to be aware of too. So, what is the effectiveness of blacklisting? Well, if the attackers follow these exact techniques, and they use a top-N approach of targeting the topologies that are most likely to be successful, then when we take those top ones away, they're going to get zero cracks in their initial pass. Instead of getting 25% of the organization in minutes, they're going to get zero in the first hours and wonder what's up. Of course, once attackers figure out what we're doing, and once they're able to figure out what users do next, then maybe we're just pushing the problem around. We blacklist these 100 most popular topologies, and now users are going to congregate in 100 different ones, and it's an arms race; we just go back and forth. Maybe. The next thing we can do, which is starting to get more adaptive, is a minimum topology change at password change time. Without any kind of enforcement of topology change, a user who just increments a number in their password has a perfectly acceptable new password. 
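The blacklist check itself is simple enough to sketch. These are not PathWell's actual data structures, and the example blacklist entries are just illustrative stand-ins for a published list:

```python
# Hypothetical starter blacklist of overused topologies (u/l/d/s per character).
BLACKLIST = {
    "ullllldd",      # e.g. Summer18
    "ulllllldd",
    "ullllldddd",    # e.g. Spring2019
    "ulllllddds",
}

def topology(password):
    return "".join(
        "u" if c.isupper() else
        "l" if c.islower() else
        "d" if c.isdigit() else "s"
        for c in password)

def passes_blacklist(candidate):
    """Reject any candidate whose *shape* is on the deny list,
    regardless of which actual words or digits it uses."""
    return topology(candidate) not in BLACKLIST

print(passes_blacklist("Spring2019"))    # False: that shape is blacklisted
print(passes_blacklist("sPr1ng-2019"))   # True: different shape
```

Note that the check never needs to see anyone else's password, only the shape of the candidate, which is why this mode needs no audit database.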
And similarly, even if we enforce a rule that says, hey, you can't use the same topology that you used in your previous password, they're probably going to make the smallest change that constitutes a topology change, such as picking a random letter that was uppercase and flipping it to lowercase, or picking a lowercase and flipping it to upper. That would pass the test. But it would still be lower down on an attacker's list than incrementing a number, or possibly incrementing a letter. More importantly, we asked: how can we measure the difference, the distance, between one password and another, and then what's a minimum distance that we want to enforce? One of our people, who is way more educated than I am in the computer science realm, said: oh, that's Levenshtein distance. And I said: what's that? It was what I wanted without knowing it. It's a way to measure the distance between two strings: how many edits, how many insertions, substitutions, or deletions. So for plain string changes, you measure the Levenshtein distance between two strings by looking at how many characters were changed, added, or removed. For topologies, we can do much the same thing. When we look at a couple of hypothetical password changes: if you keep all of the character types the same and you just increment or decrement one of them, the Levenshtein distance between your topologies is zero. But if you change the character class of one of the positions, then you've successfully moved your topology from one to another. And obviously, if you combine more than one change at a time, the edit distance of the strings and the Levenshtein distance of the topologies may differ. And if you go about making multiple changes at once, your new password is not going to be the very next thing the attacker tries. 
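Assuming the standard dynamic-programming definition, topology distance is just Levenshtein distance applied to the topology strings rather than to the passwords themselves:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def topology(password):
    return "".join(
        "u" if c.isupper() else
        "l" if c.islower() else
        "d" if c.isdigit() else "s"
        for c in password)

# Incrementing a digit changes the password string but not its topology:
print(levenshtein(topology("Summer18!"), topology("Summer19!")))  # 0
# Flipping one letter's case moves the topology by exactly 1:
print(levenshtein(topology("Summer18!"), topology("SumMer18!")))  # 1
```

A MinLev-style rule would simply reject the change when this topology distance falls below the configured minimum.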
When they've learned that your password used to be Password20!, then Password21! won't be the first thing they try. It might be the 10th. It gets more interesting if you change more positions throughout the string. And every time you make a topology change, you multiply out the number of different paths the attacker has to go down in order to try to find you. Okay, so this still isn't really wear leveling. Wear leveling, at least as I borrowed the term, comes from the way things like solid-state disks, SSDs, try to spread their writes out across all of the different memory cells before they come back to one that was previously written to and write to it again. In this case, we want to take all these users who've bunched up in buckets of a specific topology and spread them out. The most ideally wear-leveled population of user passwords would have, on average, one or fewer users per bucket, or at least no more users in any particular bucket than in any other. That way, if an attacker picks any random topology, or rather any topology that they know humans are likely to gravitate towards, they'll get few if any cracks, instead of: oh, 5% of your users landed on that topology. So we have to think about what the impacts to an attacker are of applying wear leveling effectively and uniformly. Now, there are downsides to doing it too, and I'll get to those in a second. But if the attacker is used to being able to target the first, or the top five, topologies and get 10% of users per topology, 50% of your entire population, in just 10 topologies or fewer, then if users were spread completely out, you would end up with something like six orders of magnitude more work to get the same number of cracks, or six orders of magnitude fewer cracks using the same amount of guessing time. Now, in reality, it's not going to be that good. And it could be even worse. 
Suppose the attacker somehow knew exactly which topologies were in use within your organization; then they could target just those topologies and ignore all the ones that aren't in use. However, because their success rate in any given topology is now minimized, it's controlled: there's not going to be more than one user, or more than your configured cap, in that particular topology. So their success rate still drops by two to three orders of magnitude. In the realistic case, users spread out, but you're not rigidly assigning them a topology; you may as well assign them a password if that were the case, and that would be terrible and everyone would kill you. In that realistic case, you're probably looking at a four to five orders of magnitude change in work factor. And by that I mean, surprisingly to us, it works out more or less symmetrically. If you say: I used to spend this many hours and crack this many thousand passwords, and I want to spend that same number of hours and see how many passwords I crack; you will crack one ten-thousandth or so of the passwords you used to in the same amount of time. And by the same token, if you say: I used to crack this many thousands of passwords, and I want to crack that same number no matter how long it takes; it's going to take you 10,000 to 100,000 times as long as it used to. So what are some costs here? The one that first comes to mathematical minds is: I'm blacklisting parts of the key space. That means I'm making it so there are fewer possible passwords users could choose. Oh my God, we're reducing our key space! That helps the attacker, because now they have less ground to cover. But once you do the math, that actually almost disappears into nothing. 
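Back-of-the-envelope arithmetic for those work-factor claims, with made-up but plausible numbers (100,000 users, 11-character passwords, a top bucket holding 10% of users); the exact magnitudes in the talk will depend on the real policy and population:

```python
users = 100_000
buckets = 4 ** 11                 # ~4.2 million possible 11-char topologies

# Before wear leveling: the attacker's best single topology
# yields ~10% of the population.
hits_before = 0.10 * users        # 10,000 cracks from one mask

# After ideal wear leveling: users spread uniformly across buckets.
hits_after = users / buckets      # ~0.024 expected cracks per mask

print(hits_before / hits_after)   # ~4.2e5: five-plus orders of magnitude
```

Targeting only the in-use topologies pulls the attacker's denominator back down toward the number of users, which is why the realistic estimate lands at four to five orders of magnitude rather than six.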
For instance, take an eight-character password length. In reality, there's almost no world and no hash in which eight-character passwords are okay anymore, but it keeps the math simple, so start with eight characters. We're going to have four to the eighth possible different topologies: 64K, about 65,000. If we blacklist the 100 most popular topologies, that's still less than 0.2% of the possible topology space. And if you're talking about nine-, 10-, or 11-character passwords, the percentage of the key space you're deleting by blacklisting is even smaller, because you're talking about the top 100 out of a million-plus instead of out of 65,000. Now, about forcing unique topology use, or at least spreading uniformly among the topologies that are not blacklisted: you actually do have the problem that any given randomly selected topology is now more likely to have a password in it than before you imposed this rule. But consider that what we're getting rid of is the property where an attacker can find one topology that holds 5% to 10% of your users. Even if instead they can only find a topology that holds 1% of your users, the difference is still massively in our favor; this is an improvement for the defender. Now, there's another possible downside. As Perz said, you're going to have violence against your security staff. Well, maybe. Any new control that adds work for users is going to be resisted one way or another. One of the things you can do about this is measure it, like doing pilot testing. Carnegie Mellon actually did a usability study on the difficulty of different, stronger password strength enforcement schemes. And their results were: yes, it matters, but it's not as bad as you might think. That's the TL;DR of it. And there are specific ways you can address it with user hinting. 
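The key-space-reduction arithmetic is a one-liner to check, assuming a 100-entry blacklist and the four-class topology counts from earlier:

```python
blacklisted = 100

# Fraction of the topology space removed by a 100-entry blacklist:
print(blacklisted / 4 ** 8)    # ~0.0015, i.e. ~0.15% of 8-char topologies
print(blacklisted / 4 ** 11)   # ~0.000024, i.e. ~0.002% of 11-char topologies
```

So the attacker's "less ground to cover" gain from the blacklist is a rounding error compared to the clustering they lose.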
Now, what are ways you could hint to the user? Hey, your password isn't strong enough; if you adjust it this way, it would become strong enough. Or literally: your password is too weak, make it this instead. Those are decisions we can work our way through, and different organizations can choose differently. But it appears from CMU's research that some flavor of that can drastically reduce the user complaints. You're going to have cranky users, but you're not going to have 100% of your users get cranky. Now, we always have to remember that if we're going to give extra information to users at password change time, that might somehow become hints to the attacker. Some of those things we've thought through and figured out ways to address; others are just going to be trade-offs that we have to know we're making. And when an organization chooses, for instance, what kind of hint level to use, they're going to have to keep that in mind. All right. So enough of that: the code. A few years back now, we released a PAM module that implements all this stuff, with optional controls to enable it and set the different parameters. We developed and tested it on multiple Linux distributions: Gentoo, Ubuntu, probably some of the Red Hats. And I believe Solaris, but don't quote me on that. It basically implements all the things that I just talked about, and I'll talk a little more about specifically how in a second. We did patent the ideas here, but the code is AGPL. You can use it for free all you want. The TL;DR of the way the AGPL works is: you can use this all you want for free stuff and open-source stuff. If you want to make money off of it, or make custom modifications that you make money off of, then you have to talk to us, because the AGPL license won't work for that; you'll have to ask us for a dual license or something. But that's fine. 
We did it as a PAM module for Linux deployment, but with the thought that the API in the library could be used by other things too, like an LDAP server that implements single sign-on, or what have you. Yes. So Perz asks about how attackers would adapt; I actually have that in here in a little bit too. Okay. So what are the different modes that we implemented in the PathWell open-source code? Audit mode is simply the tracking of topologies that are used as users change their passwords. When you start in an organization, you don't know the topologies in use, other than by cracking their existing password base, which is an option, especially if you're us, but might not be an option for everybody. So you can enable this to record topology use as it goes. Now, if you're going to keep a database of all your users' topologies, you'd better consider that database pretty sensitive and be careful with it. Because of implementation details in our proof of concept, it's currently limited to tracking the topologies of passwords up to 29 characters long, which ought to be enough for anybody. Then there are enforcement modes, which actually impose different controls. The first and most obvious one is blacklisting. It does basically everything I said before that blacklisting should do. We distribute a standard starter list of topologies to blacklist, but you can also add or modify your own. And again, I'll reiterate: blacklisting by itself is not enough as a permanent solution. But it does help you be more resistant than other organizations that somebody might attack. You don't have to run faster than the bear; you just have to run faster than the other guy. The next enforcement option is MinLev. That's Levenshtein distance: the minimum distance between the topology of the new password candidate and that of the previous one. 
Actually, I should mention: neither blacklist mode nor MinLev mode requires auditing to be turned on. So if you have heartburn about the idea of maintaining a database of in-use topologies, and I don't blame you, then these features don't need that at all, because they only care about what your new password is, in the case of blacklist, or what your new password is compared to your old password, in the case of MinLev. They don't care about what your new password is compared to the entire rest of your organization; that's what MaxUse is about. MaxUse allows you to set a threshold: any given topology bucket can only have one user, or can only have five users, or whatever's the right number for the size of your organization. And again, the point here is: if you want to prevent users from clumping anywhere, you use this, and then a user setting their password can't set it to the same topology that they, or anybody else in your organization, has used, or that no more than two have used, or whatever your setting is. Now, the hint level is an option that matters in enforcement mode, but it isn't itself an enforcement; rather, it controls the user experience, if you want to give any hints. We did an experimental hint level implementation, partly to facilitate the kind of research I talked about before: user acceptance, what the user experience is when I tell them this much information as opposed to that much. Out of the box, PathWell doesn't give users any particular feedback or information. It doesn't tell them how they could make their password stronger, doesn't leak information about the organization, doesn't put information on the screen that would be useful to somebody shoulder surfing. But if you choose to, if it's right for your organization, you can turn up the hint levels. 
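The MaxUse idea reduces to a bucket count. PathWell's real audit database and PAM plumbing are not shown here; this is just the counting logic sketched over an assumed in-memory store:

```python
from collections import Counter

topology_use = Counter()   # audit data: topology -> number of users on it

def topology(password):
    return "".join(
        "u" if c.isupper() else
        "l" if c.islower() else
        "d" if c.isdigit() else "s"
        for c in password)

def maxuse_change(candidate, limit=1):
    """Accept the change only if the candidate's topology bucket
    holds fewer than `limit` users; record the new use on success."""
    t = topology(candidate)
    if topology_use[t] >= limit:
        return False
    topology_use[t] += 1
    return True

print(maxuse_change("Summer19!"))   # True: the bucket was empty
print(maxuse_change("Winter20!"))   # False: same topology, bucket now full
```

This is also why MaxUse, unlike the other two modes, genuinely needs the audit database: the decision depends on the whole population's topologies, not just one user's old and new passwords.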
And this is supported by the back-end API as well, not just by the PAM library. So if you wanted to plug this stuff into a non-PAM single sign-on server, go for it. Right now, hints are only hooked up for blacklist violations, but the other types should be doable without a problem too. I'm not going to do a live demo, but I'll show some examples of how this works in practice on a Linux box. You install the PAM library and modify your pam.d settings. We have READMEs and examples for the distributions we supported and tested on, but you can also roll your own custom PAM settings. You do a thing to turn on audit mode; you do a thing to turn on blacklist or MinLev or MaxUse, or any combination thereof; and you can also enable the hint level. Again, MaxUse requires that audit be turned on, but the other enforcement modes don't. So what does it look like when things happen? Because this is a beta, we're verbose; we don't include secrets in the output, but even in the case of success we log a bunch of info about the fact that we just accepted a password change. A failure will tell the user that it failed a specific check, the MinLev check in this case, but it won't tell them anything else. It won't say "use this one instead," and it won't tell them how to modify their candidate. It just tells them they failed, and we log that. And then a MaxUse failure: in this case MinLev was not turned on, so users were allowed to choose any new topology they wanted, as long as it wasn't in use by somebody. Well, it was in use, by them. Therefore they can't reuse the same one, and they also can't reuse the same topology as their neighbor. And we log that.
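The MinLev check being failed above is, roughly, an edit-distance requirement between the old and new password topologies. Here's a hedged Python sketch of that idea; the real libpathwell implementation is in C, and these names are hypothetical:

```python
# Rough sketch of a MinLev-style check: require the new password's topology
# to be at least `minlev` edits away from the old one. Not the real libpathwell code.

def topology(password: str) -> list:
    """Token list of character classes, e.g. 'Abc1' -> ['?u', '?l', '?l', '?d']."""
    classify = lambda c: "?u" if c.isupper() else "?l" if c.islower() else "?d" if c.isdigit() else "?s"
    return [classify(c) for c in password]

def levenshtein(a: list, b: list) -> int:
    """Classic dynamic-programming edit distance over topology tokens."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def minlev_allows(old: str, new: str, minlev: int) -> bool:
    return levenshtein(topology(old), topology(new)) >= minlev

print(minlev_allows("Summer19!", "Winter20!", 2))    # False: identical topologies
print(minlev_allows("Summer19!", "Summer19!x7", 2))  # True: two tokens added
```

The first rejection is the point of the whole feature: "Summer19!" and "Winter20!" are completely different strings, but to a mask attacker they are the same password.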
But again, we don't give away anything else. Now, showing a bit of the hint-level stuff: if you want, you can look at the topology of the password the user supplied and say, that's a topology we're not going to allow, so let's figure out what change would be viable. That change is basically randomly selected at password-change time by the engine. It says: this topology they asked for is not allowed; what are the neighbors of this topology that are allowed? I'll pick one of those at random and suggest an edit that would land the user in the allowed topology. Funny enough, what it's suggesting here is actually a topology that wouldn't be great on its own. But maybe the organization's minimum password length is such that by making this one longer, the user ends up being an outlier anyway, because most users won't make their password any longer than they need to. Jack the hint level up one more, and now we'll actually draw on their existing password: point at a spot in it and say, put a this here, put a that there. One level more, and we can suggest specific characters: try replacing this with that, try inserting this here, and your password will end up as this. Classically, back in the day, nobody ever suggested echoing plaintext back to the human. But there are actually use cases and scenarios where that's reasonable, or at least where it's reasonable to let the user choose. That's why a lot of web apps these days have a little button you can click or hold to reveal the password, rather than always keeping it opaque. It's always a trade-off. So if it's right for your organization, fine; I'm not going to judge you, until I come and test it and own you, and then I'll judge you. So what's next for the Pathwell project?
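The neighbor-picking the engine does can be sketched like this: enumerate topologies one edit away from the rejected one, keep only the allowed ones, and pick one at random. Again, this is a hypothetical Python illustration of the idea, not Pathwell's actual C code:

```python
import random

# Hypothetical sketch of hint generation: find allowed topologies one edit
# away from a rejected one and suggest one. Not the real libpathwell code.

CLASSES = ["?u", "?l", "?d", "?s"]

def neighbors(topo: list):
    """Yield every topology one substitution or one insertion away."""
    for i in range(len(topo)):
        for c in CLASSES:
            if c != topo[i]:
                yield topo[:i] + [c] + topo[i + 1:]  # substitute at position i
    for i in range(len(topo) + 1):
        for c in CLASSES:
            yield topo[:i] + [c] + topo[i:]          # insert at position i

def suggest(topo: list, is_allowed) -> list:
    """Pick, at random, one allowed neighbor of a disallowed topology."""
    options = [n for n in neighbors(topo) if is_allowed(n)]
    return random.choice(options) if options else None

# Example: a one-entry blacklist; everything in it is disallowed.
blacklist = {("?u", "?l", "?l", "?l", "?d", "?d")}
hint = suggest(["?u", "?l", "?l", "?l", "?d", "?d"],
               lambda t: tuple(t) not in blacklist)
print(hint)  # e.g. ['?u', '?l', '?l', '?l', '?d', '?d', '?s']
```

Translating the chosen neighbor back into "insert a symbol here" against the user's actual candidate is then straightforward, since the edit that produced the neighbor is known.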
Well, like I said, hints are only implemented for one of the modes. So, easy to say: we should go ahead and add hint support for the other modes. More platforms: the working implementation is Linux PAM, but the bigger organizations, the places where this is going to be most useful, are going to be running either AD or some other large single sign-on platform. If a vendor of one of those platforms wants to work on making their product better than all the other products in the universe, come talk to us. We can also easily come up with more enforcement options. First of all, this is highly focused on one specific, very successful, but still only one, aspect of password cracking: the mask attack. Are there things we could learn from other highly successful password-cracking methodologies that we could in turn convert into defensive, dynamic strength enforcement at password-change time? Say: I know that the password you just asked for is going to fall victim to this specific attack; we can compensate for that attack pretty easily, so we won't allow that password change. An easier, more straightforward thing would be regular expression support. Right now, our blacklists are basically just masks tokenized into a machine format; we just have a list of masks. We could enforce regexes too. You could make one regex that disallows a huge variety of different ways to spell DEF CON in your password all at once, and do the same with whatever your company name is, or with exactly the word you find is the most common word in your organization's passwords, because you do regular password audits, because you're a smart customer of ours.
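As a sketch of what that regex support could look like (hypothetical; this is a proposed feature, not something Pathwell ships today), a single pattern can knock out many leetspeak spellings of one word at once:

```python
import re

# Hypothetical regex-blacklist sketch: one pattern disallows many
# leetspeak spellings of a single word. Not a shipping Pathwell feature.

# Match 'defcon' with common substitutions (3 for e, 0 for o),
# anywhere in the candidate, case-insensitively.
DEFCON_RE = re.compile(r"d[e3]fc[o0]n", re.IGNORECASE)

def regex_blacklist_allows(candidate: str, patterns) -> bool:
    """Reject any candidate that matches any blacklisted pattern."""
    return not any(p.search(candidate) for p in patterns)

print(regex_blacklist_allows("MyD3fc0n2024!", [DEFCON_RE]))          # False
print(regex_blacklist_allows("correct-horse-battery", [DEFCON_RE]))  # True
```

One pattern like this covers dozens of case and digit variations that would each need their own entry in a plain blacklist.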
You could say: my company name, my city name, my sports team name, all of those, and every variation that would match these regexes, are disallowed. Now you've taken away not one blacklisted password, or ten blacklisted passwords, but millions of terrible passwords all at once. And then, if this did get adopted, what would attackers do next, and what would we need to do to adapt to that? So that's pretty much it. I'll take questions; first I'll scroll back to look for questions, then I'll take more, and I'll be around in Discord for the rest of the event.