Good afternoon. I'm here to present my talk, "We the Government Are Here to Help." What we're going to do is talk about the FIPS 140 standard and take a close look at it. So the agenda, as you can see: I want to give you a little bit of who I am, then we'll talk about the background of the standard and get into FIPS 140. I'll go through the validation process and then we'll look at some of the requirements. It's kind of hard to talk about the good and the bad of the standard without actually knowing what's in there. We'll also take a look at the future, to give you an idea of what's to come. I've also allotted a little bit of time, hopefully, for some Q&A at the end if necessary. There's also the Q&A room after the talk as well. So who am I? I am Joey Muresca. Some people may know me as lost knowledge, which is my handle on Twitter and what I go by in a couple other places. I work directly with FIPS 140, so I have experience in the area; for the last five years I've been doing FIPS validations. I presently run a FIPS validation lab, and I have seen hundreds of validations in that time period. So I've seen a lot of different products and seen how a lot of different vendors implement the requirements. Outside of work I have other interests: I do programming, I'm a lock picker, and I'm just a general security enthusiast. I think it's one of the areas that gets overlooked a lot by some people, because it's hard to convince people that they need security until it's usually too late. One disclaimer, and I added this at the last second because I realized there are a few people here who probably know who my employer is: just the standard disclaimer that any views and opinions in this talk are my own, not those of my employer or anybody else for that matter. So the question is, why am I here? I wanted to shine a new light on security standards, particularly some of the government standards.
I hear a lot of people who talk negatively about these standards. They complain about the processes; they complain about the requirements and having to implement them, without having a lot of knowledge about what's actually within those standards and what actually goes into them. So the hope was to give a new look at these standards. The important thing to remember is that no standard is going to protect against everything. Standards become dated because they all take a long time to develop. Their enforcement is always going to vary, and that has a great impact on the security as well. The last thing is that sometimes they do provide a false sense of security, and that can also be a negative issue; no matter how much I talk about it, that is always going to be something that exists. So what is the FIPS 140 standard? It is Federal Information Processing Standard number 140. This defines the requirements for cryptographic systems that are to be used within the government for the protection of sensitive information. The program is actually managed jointly by NIST here in the U.S. and the Canadian government through CSEC, the Communications Security Establishment Canada. There are two groups at NIST that handle the management of the program: the CMVP, the Cryptographic Module Validation Program, and the CAVP, the Cryptographic Algorithm Validation Program. The CAVP is the group responsible for all the algorithm testing that's required as part of the FIPS process. The last thing, as I mentioned, is that it is largely a federal government standard, though it does have acceptance in other areas. We've seen cases of other standards bodies who have required FIPS validations as a prerequisite. Yes, I can't speak this morning.
So, the past, present, and future of FIPS. The original version of the standard was 140-1. It was originally published in 1994 and was eventually replaced by FIPS 140-2. There is currently a new revision of the standard in development, 140-3; it has been in draft since 2005. I always tell people it's going to be coming, we're going to be seeing it eventually. I've been saying that since I started doing this work. So how does the validation process itself actually work? There are three main parties involved. There are product vendors: somebody has to create the product to be sold that is going to be validated. There are the accredited labs; there are over 15 at present, I think the actual number is maybe 18 or 19. And then there's the CMVP, the ones who are responsible for issuing the actual validation certificates. What's important to remember is that the government doesn't do much beyond review reports from the labs. So there's a lot of variance in testing, because each lab, even though they're following the standard, may have different methods of actually performing testing to verify requirements. So we're going to start taking a look into the actual standard itself. There are three key components to FIPS 140: the standard itself, which is the core document and what was originally developed, and two ancillary documents that they've created, the derived test requirements and the implementation guidance. The requirements are divided up into 11 sections, and I'll take a look at each of those a little bit later on. And there are four increasing levels of security, and you will definitely see the difference between those four levels; a level one validation, from a security perspective, isn't gaining you a lot.
All three of these documents are available on the CMVP web page; I've included a link in the slides. I do not know for sure whether the slides made it onto the DVD or not, but I will post them on my web page after DEF CON, so they'll be there even if they're not on the DVD. So, the FIPS 140-2 standard. As I mentioned, this is the original, core document from which the other two documents are derived. This defines all the requirements, and it also defines all the terminology. One thing that you'll learn, not just with FIPS but with most standards, is that they like to come up with their own words, and so it can be a little bit of a pain sometimes to correlate that terminology to the words that we all use normally. Now, the document can be vague. This has been one of the issues, I think, with implementations: the standard will read one way, and then it's up to an individual to actually interpret that requirement. The derived test requirements is a much longer document, and actually the one that breaks out what needs to be tested by the labs as well as what needs to be provided by vendors. It's organized, as I've shown here, using assertions, which are direct quotes out of the standard, and then underlying vendor evidence items, which you'll hear me refer to as VEs, and test evidence items, referred to as TEs. These two items are usually counterparts to each other: for a vendor evidence item, there's usually a matching test evidence item that we have to perform to verify it. The implementation guidance is the smallest of the documents, and it is intended to clarify the requirements in the other two documents. It's said that it's not supposed to introduce new requirements; however, that's not really always the case. There are times where the implementation guidance can be said to extend or even introduce new requirements that don't currently exist in the standard.
But it does tie back into both the standard and the derived test requirements. So, as you see on this slide, I've tried to provide a mapping, and I hope it comes out pretty well. The top portion is taken from the standard; I've tried to underline the quote from the standard, and as you can see, that same quote appears in the assertion in the line below. The middle picture is taken from the derived test requirements. On the bottom is an implementation guidance entry, and again, you can see the relevant versioning and information that is provided to identify what assertions and what sections of the standard these requirements apply to. So, as I mentioned, there are 11 areas of security, all outlined here. I've actually called out the ones that are largely documentation: as you can see, there are four sections there that are, for the most part, documentation. Most of those requirements simply boil down to verifying that something is being done properly. For the other sections, there are actual requirements that need to be tested and that aren't really covered anywhere else within the standard besides in those sections. So I'll just briefly go through each of these, and there's some more detail on each of them as we go through. The cryptographic module specification is the section that defines what the product is, what's being tested. The cryptographic ports and interfaces section is a very black-box look at a validated module. Roles, services, and authentication is pretty straightforward: who can use the module, how do they authenticate, and what can they do? Physical security is just as it sounds. The operational environment covers operating system requirements; those are really only applicable when validating software. Cryptographic key management deals with key life cycles, from the time keys are generated to the time they're destroyed.
And then there's the self-test section, which is probably the one section that is missing from most modules that haven't been FIPS validated, because the requirements for self-test implementations are very FIPS specific. So, as I mentioned, the cryptographic module specification defines the module being validated. The biggest weakness in this section is at lower levels, where the configuration and use of the module in an approved manner is dependent on user configuration. So for low-level validations, at levels one and two, it's up to the operator to ensure they're not using weak algorithms that might be detrimental to their security. At higher levels this gets a little stronger, because at levels three and four it's all enforced by the module. So at levels three and four, the module, the product, either always has to work in a compliant manner, or it has to have a switch that turns on the FIPS mode of operation, thereby preventing users from misconfiguring the device. As I mentioned, cryptographic module ports and interfaces is black-box kind of testing; it defines the requirements for the types of data that flow. At levels one and two, the only difference in requirements is that there's no need to have physical or logical separation of critical data. So essentially, keys, passwords, and other sensitive information can go through the same physical ports as normal data. At levels three and four, there has to be some physical or logical segmentation of this data, and also, if there's plaintext critical data, then you either have to have what they call a trusted path or a directly attached cable. The trusted path is a little vague, and it has actually created a bit of a catch-22: one thing that was originally viewed as a trusted path was a mutually authenticated TLS session.
Well, the problem with that was that now your data is encrypted, so you no longer actually are sending stuff in plaintext. So you had a trusted path, but you also didn't have to meet that requirement. As I mentioned, section three, roles, services, and authentication: like I said, the name pretty much says it all. At level one, there is no authentication requirement, so basically there's no need for an operator to authenticate. This is probably the most common level for software validations, because for the most part, software applications, and particularly software libraries, don't implement any sort of authentication. At level two, there's what's referred to as role-based authentication. The obvious downside to this is that there's no real accountability: you basically have a set of authentication roles, and 20 people may all have that login information. So you don't particularly know who's actually the one logging in, who's actually doing the changes. The other thing is that, obviously, password lengths can only be enforced by policy. So, again, it's up to the operator to ensure that they're using properly strong passwords. At levels three and four, it gets better. Again, this is a common theme of the standard, for the most part: higher levels gain you a little bit more security. They have identity-based authentication at these higher levels, which includes some user accountability, because now each user is uniquely identifiable. What roles they can assume is based on credentialing. So you would have individual username and password combinations, or individuals would have their own unique certificates. Each person would be uniquely identified when accessing the module. So the password requirements are relatively weak, and I think this is actually a result of age more than anything else.
As I mentioned, the standard is actually 10 years old now. So its strength requirement is that there's a one in a million chance of an attacker being able to brute force the authentication information. Well, a simple four-character alphanumeric password is enough to meet this requirement. Obviously, nobody's going to think a four-character password is secure. I mean, even a six-digit PIN is enough to meet this requirement. So it's a very weak requirement in that sense. There are no restrictions on the types of passwords, so dictionary words, all your colorful four-letter words, could technically be legitimate passwords that meet the one in a million chance. The one in a hundred thousand chance is based on multiple attempts within a minute. So, in addition to being able to prevent a single brute-force attempt, you have to find a way to limit people from just pounding away until they can successfully get a password that matches. This is normally enforced via lockouts, which is the most common method. This kind of ignores long-term attacks, though. But again, I think one of the reasons that they've focused on a single minute is the idea that, for a long-term attack, if somebody has your hardware or has that much access to your hardware, you're pretty much SOL anyway. The future might be a little better, and that all depends on what the final version of 140-3 looks like. That's something we can discuss; I don't know if we'll have time to get through it today, but if you have questions about that, we can look into it a little deeper. So, as I mentioned, physical security. This is not really applicable to software modules, again, because they're all reliant on the computers they run on.
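Before moving on, the arithmetic behind those password-strength numbers is worth seeing concretely. A minimal sketch: the two thresholds are the standard's (1 in 1,000,000 per attempt, 1 in 100,000 per minute), while the five-attempt lockout is an illustrative assumption, not a FIPS requirement.

```python
# Worked arithmetic for the authentication-strength bounds discussed above.

ONE_IN_A_MILLION = 1 / 1_000_000        # single-attempt bound
ONE_IN_100K_PER_MINUTE = 1 / 100_000    # multiple-attempts-per-minute bound

# A 4-character, case-insensitive alphanumeric password: 36 symbols per slot.
alnum4 = 36 ** 4
print(alnum4)  # 1679616 -- already a bigger space than 1,000,000
assert 1 / alnum4 < ONE_IN_A_MILLION

# A 6-digit PIN gives exactly a 1,000,000-value space, which is why the
# talk counts it as clearing the single-attempt bar.
pin6 = 10 ** 6
assert 1 / pin6 <= ONE_IN_A_MILLION

# Per-minute bound, enforced with a lockout: if the module allows at most
# 5 failed attempts per minute (an illustrative choice), the probability
# of success within a minute stays under 1 in 100,000 for both spaces.
max_attempts_per_minute = 5
assert max_attempts_per_minute / alnum4 < ONE_IN_100K_PER_MINUTE
assert max_attempts_per_minute / pin6 < ONE_IN_100K_PER_MINUTE
```

The point the numbers make is the speaker's: spaces nobody would call secure today sail past both thresholds.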
The requirements are actually broken down not only by level, but also by the type of product being validated. So single-chip modules, single-chip products like integrated circuits or smart cards, have a different set of requirements than multi-chip standalone or multi-chip embedded devices, where you might have routers and switches, or hardware security modules, PCI crypto accelerators, and the like. At level one, there essentially is no physical security. The requirement is essentially that the module be made of production-grade components. The testing is limited to ensuring that it's not just a bunch of components that have been soldered together in somebody's garage, and that it's actually been commercially produced. At level two, they move to opacity and tamper evidence. At this level, what they're looking for is that you can't view the internals of the module, and that if somebody does open up the enclosure, you actually can tell that they've done so. Level three is tamper response. In this case, if you have an enclosure that's actually removable, they're looking for a response to be taken when somebody opens the enclosure, and in that case it's zeroization of the keys within the module. At level four they have tamper detection, and level four is very rare, in part because of the level four requirements. Tamper detection essentially means that you have an envelope around the product such that any attempt to puncture, drill, or mill the enclosure would actually result in the zeroization of the keys within the module, thereby securely destroying any plaintext keys that might be used by that module. So what is opacity? It's actually a very subjective requirement of the standard.
And that's part of the reason why I wanted to take a little bit deeper look at it. Ventilation can be a little bit tricky, especially for networking modules, which are the most common level two validations. The interpretation has changed over time, and I'll show you an example of that in the coming slides. Previously, in order to actually fail this requirement, you had to be able to make out manufacturer information and part number information off of the integrated circuits and the components within the module. That's no longer really required: now they will even take outlines of components, silhouettes, as being sufficient to fail this requirement. So it is very subjective, and it comes down, in part, to the government reviewer as well as the testing laboratory. In this example here, I've shown an obviously failing case, and it actually does show up pretty well. As you can see there, the one circuit that you can make out, and it's kind of blurry, is actually an Intel chip. It was a surprisingly good picture considering I had to use my cell phone to take it. Typically we would use a better camera than a cell phone camera to capture these images, and obviously what you can usually make out with the human eye is a little better than what you can get with a lens, because we can't really poke a lens through those grates that you can clearly see inhibiting the camera. So in this case, you have an obviously failing case, both under the old methods and the new methods. In this next example, we have a case that's a little bit more vague. It's the same device, but taken at a different angle and from a different location.
In this case, really the only integrated circuit you can make out is this side of an integrated circuit here. You can't really tell what it is. Actually, even when holding the device up and using the human eye, it's very difficult to make out any part information from that chip. Now, under the old system, that would be acceptable. Under the new one, it's a little more vague, and typically this would be considered a fail, because you can see the component even if you can't necessarily make out what it is. One of the big complaints about this requirement, obviously, is: what does that really gain you? If an attacker really wanted to know what's inside the product, they could go buy most of these. Pretty much every hardware device that's been validated is the same off-the-shelf item that vendors sell. This last one, which looks like a giant gray box, is the top of the case. This is a pretty apparent pass: if you can see through that, then you have X-ray vision, and superpowers are good for you. So obviously, in this case, there is no opacity concern. So, tamper evidence and response. At level two, it just needs to be apparent that an attacker has attempted to compromise the system. There's a little bit of weakness in this, because there's limited testing that can be performed by the labs; in particular, they're not allowed to add new materials. Now, obviously, if labs could sit there and spend 10 hours taking off a label and painting it back on, that would be a problem; that might be a little excessive for testing, particularly for what they're trying to capture at level two.
However, there are some modules that honestly could probably be failed with nothing but a Sharpie, filling in a little bit of space with a matching color. Since you aren't allowed to do that as part of testing, it's a little bit disingenuous to think that you're actually gaining much from the tamper evidence. Obviously, sometimes the labels fail so magnificently that it's not an issue. At level three, as I mentioned, you have to respond to tamper events: when a door or cover is removed, you actually have to zeroize all the keys. Essentially, destruction of the keys. Zeroization is one of those nice terms that they decided to invent; it's all about key destruction. So, for the operational environment: at level one, a single user mode is defined, and this is actually a definition that's changed over time. About four or five years ago, the definition used to be that, in order to be validated as a software module at level one, you had to be tested and validated on a system that could only log in a single person at a time. They went so far as to actually provide guidance to the individual labs on how to configure a UNIX-based system to operate in single user mode. It's a little ridiculous, because it ignores the entire client-server architecture and the fact that most servers are going to have more than a single user at one time. This definition has changed a little bit. They've vaguely hand-waved it away by saying that single user mode means that only a single instance of a software module can be accessed by a single user.
The thought process is that, when you load an application into memory, if different operators are loading that application, they supposedly would each have their own instances. Obviously, this has its own weaknesses, but it has kind of satisfied the server architecture requirements. And again, for level one, what you're trying to get from a validation is a little different than at levels two, three, and four. At level two and higher, they have a requirement for a Common Criteria-evaluated operating system. In these cases, it actually limits the platforms that can be validated, because you have to tie to specific validated platforms. For each Common Criteria evaluation of an operating system, there's actually a list of hardware platforms on which the testing was performed, and you have to match that as part of this. That greatly limits the usability and portability of some of the level two software modules, and that's why you don't see a lot of level two software modules. And for some reason, you don't see any level three or level four software modules at all. So, cryptographic key management. This is actually probably one of the two sections of the standard that has the most requirements related to it. It deals with everything to do with the key life cycle. This includes random number generation, key generation itself, how keys are entered and output as well as their establishment, the key storage requirements, which, as I say here, are mostly meaningless, and key zeroization. Random number generation and key generation are both required to use approved standards. For random number generation, they've got a set of approved deterministic RNGs that are acceptable. Symmetric keys simply must use these approved RNGs.
Asymmetric key generation methods actually have to follow the sets of approved standards. For asymmetric keys, there are two or three standards that are acceptable for the generation methods, and it varies based on which asymmetric algorithm you're using. For DSA, ECDSA, and RSA, the methods are defined in FIPS 186-2 and 186-3. Additionally, the key generation method for RSA keys in ANSI X9.31 is also acceptable. So, key establishment and entry and output: these are really the only requirements that vary based on behaviors that the products don't necessarily control. Manual distribution methods are, I consider, largely impractical, but they're considered relatively secure. Essentially, if I wanted to assign the same key to a series of devices, I would walk around and punch the same key into each device, either using a key loader, verbally telling it to somebody else (passing around hex characters that way would be real fun), or using tokens, which is probably the most practical of the manual distribution methods. Electronic distribution is keys over what they call unsecured media. In this case, that's your local networks, your wide area networks, the internet: any time that you don't necessarily have full verification of the network, these are considered unsecured methods. In those cases, the keys have to be encrypted; you have to use methods like TLS, SSH, and Diffie-Hellman in order to actually protect the keys. So, for key storage and zeroization: as I mentioned, the key storage requirements are pretty weak. There are no real requirements for the form of stored keys, so it's not required that you encrypt keys, even at higher levels.
The other requirements for storage are, at best, a little vague. They have what they call an association of a key and an entity, and in this case, a lot of times it boils down to, well, a key is associated with the product that it's in, or maybe it's tied to a specific user. But there's no real requirement on how that key is stored. Zeroization, which boils down to just the destruction of keys, involves actually overwriting keys. That's important a lot of times for software validations in particular, because most of the time they just do simple free operations, which may not necessarily be destroying the data. So the standard actually requires that keys be overwritten with zeros, ones, or random data. Zeroization needs to exist for all plaintext keys, but it can be done procedurally, so there's no requirement for when it's performed or how it's done. It doesn't need to be automatic for any reason at all, except as a tamper response for physical modules at levels three and four. So, the self-tests. There are two categories of self-tests: the power-up self-tests, which basically boil down to a series of known answer tests for approved algorithms and a firmware integrity test, and the conditional tests, which are done for specific services. They have a continuous random number generator test, which is designed to ensure that you aren't generating the same value over and over again. The funny thing is that there's still some probability that you could legitimately generate the same value twice; well, you technically can't with a continuous RNG test in place. There's the pair-wise consistency test, which is designed to check the validity of key pairs for asymmetric keys.
There's the firmware load test, which is for when you actually have hardware modules that can load and upgrade their firmware; the bypass tests, which we'll go into a little more because they're a little more complicated; and the manual key entry test, which is essentially to ensure integrity when you're doing manual key entry, which is very rare. The only time you do manual key entry is if you're actually punching keys in through a touchpad on the product itself. So, as I mentioned, let's take a little bit more of a look at the bypass tests. There are two types: an exclusive and an alternating bypass. The exclusive bypass is basically having a switch where encryption is either on or off: you're either encrypting all data going through the product, or you're letting it all pass through without doing any processing. With an alternating bypass, you actually have differing channels, where some data might be configured to go in plaintext and other data might be configured to go encrypted. The most common example would be a router, where you might have data going in multiple directions, and some of that may be through an IPsec tunnel. So again, the configuration determines which of those are actually being encrypted and which are plaintext. The bypass tests have one common theme: don't accidentally pass plaintext data. There are a couple of steps to the process, and they have a couple of requirements related to it; however, it's most commonly accomplished using a test fire. Essentially, that means sending a test packet, for network devices usually through a loopback, and verifying that it's being encrypted, that you're not just spitting out plaintext data when you should be encrypting it.
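Three of the conditional tests above can be sketched in a few lines each. This is a hedged illustration, not FIPS test code: the 16-byte block size, the textbook toy RSA numbers, and the XOR stand-in cipher are all assumptions made for the sake of the example.

```python
import os

class ContinuousRNG:
    """Continuous RNG test sketch: fail hard if two consecutive blocks match."""
    BLOCK = 16  # bytes compared per draw (illustrative choice)

    def __init__(self):
        self._last = os.urandom(self.BLOCK)  # held back for comparison only

    def random_bytes(self) -> bytes:
        block = os.urandom(self.BLOCK)
        if block == self._last:
            # The irony noted above: once this test is in place, a
            # legitimately repeated value can never be returned.
            raise RuntimeError("continuous RNG test failed: repeated block")
        self._last = block
        return block

def pairwise_consistency_rsa(n: int, e: int, d: int, probe: int = 42) -> bool:
    """Pair-wise consistency sketch: sign with d, verify with e."""
    sig = pow(probe, d, n)
    return pow(sig, e, n) == probe

def bypass_test_fire(encrypt, key: bytes) -> bool:
    """Bypass test fire sketch: loop a known packet through the channel;
    getting the plaintext back out means the channel is in bypass."""
    packet = b"BYPASS-TEST-PACKET"
    return encrypt(packet, key) != packet

# --- usage with toy values ---
rng = ContinuousRNG()
assert len(rng.random_bytes()) == 16

# Textbook toy RSA pair: n = 61 * 53 = 3233, e = 17, d = 2753.
assert pairwise_consistency_rsa(n=3233, e=17, d=2753)

def xor_channel(packet: bytes, key: bytes) -> bytes:
    """Stand-in for the channel's configured cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(packet))

assert bypass_test_fire(xor_channel, key=b"\x5a")      # transforming data: pass
assert not bypass_test_fire(xor_channel, key=b"\x00")  # leaks plaintext: fail
```

The last two lines are the whole point of the test fire: a misconfigured (here, null-keyed) channel hands back plaintext, and the check catches it before real traffic flows.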
So what are the best and the worst of the requirements? On the good side, there's the enforcing of stronger algorithms. One thing that I was very surprised about is the number of people who, when you talk to them, still use algorithms like MD5 and DES to do data protection. One thing the standard does do is try to ensure that only strong algorithms are being used. The physical security at higher levels is actually very good. At levels three and four, you're getting what you would expect for physical security: a module that is going to be destructible, that's not going to retain information after it's been tampered with. The bypass test, while it can be a little cumbersome, I actually think is a very good policy in and of itself, even without the standard; you never want to accidentally be sending data in the clear that you don't want to be. So it's a very common-sense test in that sense. And then encrypted electronic key entry: again, passing keys in plaintext over a wire is never a good idea, but you'd be surprised how many people do it. The worst? Well, the limitations on physical security testing. As I mentioned, the limitations on what the labs can actually do from a testing perspective on tamper evidence are bad, because they obviously don't take into account the level of skill that an attacker would have. Then there are the limited zeroization requirements: again, not forcing the module to actually perform zeroization at set times is not going to improve security at all.
So again, that's one of those items that isn't really addressed well: how to protect the keys when somebody gains access to the product. The third issue is something I didn't go into much, which is how hardware-centric the standard is. If you've ever read the standard, or if you ever do decide to read it, you'll notice that a lot of the requirements are very hardware-centric; when it comes to software, they just don't make sense, and there's no easy translation. This has created issues for software products, because they're trying to satisfy requirements that don't fit the mold of the standard. Another item is the lack of protection for key storage. While that's somewhat understandable at the lower levels, I believe the higher security levels should come with at least some requirement that certain keys be protected not only while they're being distributed but also while they're stored in the module. Obviously, if you have plaintext keys within a product, they're vulnerable to anyone who can get physical access to the system. The last problem is that the standard is largely ignorant of side-channel attacks. Again, this is largely due to the age of the standard; as I said, it's about ten years old. When they developed it, they created a documentation-based section that allows vendors to say, "oh yes, we protect against side channels," without requiring any testing of that claim. That's something they may correct in 140-3.
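To make the zeroization point concrete, here is a minimal sketch of what active key destruction looks like, as opposed to the purely procedural "the user can call a command" approach. This is illustrative, not from the standard: it uses a mutable buffer so the key bytes can actually be overwritten in place, and it refuses to hand out the key after zeroization. (In CPython, copies of the key may still exist elsewhere in memory; the sketch only shows the intent.)

```python
class KeyHandle:
    """Illustrative holder for key material that supports active zeroization."""

    def __init__(self, key_material: bytes):
        # bytearray is mutable, so the bytes can be overwritten in place;
        # an immutable `bytes` object could linger until garbage collection.
        self._buf = bytearray(key_material)
        self._zeroized = False

    def use(self) -> bytes:
        if self._zeroized:
            raise RuntimeError("key has been zeroized")
        return bytes(self._buf)

    def zeroize(self) -> None:
        # Overwrite every byte, then mark the handle dead so the key
        # cannot be used after destruction.
        for i in range(len(self._buf)):
            self._buf[i] = 0
        self._zeroized = True


k = KeyHandle(b"\x2a" * 16)
k.zeroize()
assert all(b == 0 for b in k._buf)
```

The point is that the module itself destroys the material and enforces the dead state, rather than leaving plaintext keys sitting in storage and hoping an operator remembers to clean up.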
But how many types of modules, and which products, it will apply to is still up in the air. As I've mentioned a couple of times, a newer version of the standard is being drafted. It has been in development for almost seven years now, and it's become a bit of a wait-and-see situation. There are some new requirements in the current draft, particularly authentication requirements that must be enforced by the modules themselves. So instead of telling a user "you have to use an eight-character or twelve-character password," those rules would be enforced within the product: it would actively reject short passwords. Early revisions included specific strength increases, and I'm hopeful those will remain, or be added back in some form. As I mentioned, there are side-channel testing requirements, particularly at the higher levels. At a minimum these would apply to single-chip modules, and there's been talk of expanding them, or optionally allowing multi-chip modules to also be tested against these side-channel methods. There are also improved zeroization requirements, with limitations on procedural zeroization. It's no longer as easy as saying, "when you want to zeroize a key, you can call this command, but it's up to the user to perform it." They've tightened the circumstances under which those types of zeroization can be performed, to prevent overuse of procedural key destruction. So, when is the future coming? Well, that's a good question.
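The idea of module-enforced authentication can be sketched very simply. The specific rules below (minimum length, mixed character classes) are illustrative assumptions, not the draft's actual text; the point is only that the product rejects weak passwords itself instead of relying on a written policy.

```python
def set_password(candidate: str, min_length: int = 8) -> None:
    """Module-enforced password policy: reject weak passwords outright
    rather than relying on a policy document the operator may ignore.
    The specific rules here are illustrative, not taken from the draft."""
    if len(candidate) < min_length:
        raise ValueError(f"password must be at least {min_length} characters")
    classes = [
        any(c.islower() for c in candidate),
        any(c.isupper() for c in candidate),
        any(c.isdigit() for c in candidate),
    ]
    # Require at least two character classes so trivial all-lowercase
    # passwords are rejected by the module itself.
    if sum(classes) < 2:
        raise ValueError("password must mix at least two character classes")
```

A six-character password handed to `set_password` fails immediately, which is exactly the behavior the draft pushes toward: the check lives in the product, not in a policy binder.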
As I've mentioned, the standard has been in development for seven years. The current best guesses are the end of 2012 or early 2013, though, being the pessimist I am, I've started taking the very negative approach of "I'll believe it when I see it." What I will say is that I think a large part of the reason it's taken so long is that they feel what they have is relatively good for what they're trying to do, so they've taken their time developing the new version. The current public draft is actually rather dated; it was released in December 2009. It was a Christmas gift, a very bad Christmas gift nonetheless. From conversations I've had with people at NIST, they have internal drafts with changes and corrections based on publicly provided comments, but nobody outside NIST knows for sure what's actually in that version. It should be, and will be, an improvement over the FIPS 140-2 standard, but it's still by no means perfect; if somebody came out with a perfect security standard, I guess we'd all be out of jobs. To summarize: the FIPS 140-2 standard does provide some good requirements that can be improvements over baseline security. As I mentioned, there are a lot of products I've had a chance to look at over the years where simple changes would greatly improve the security of the product. While the standard is a good first step, it obviously doesn't guarantee that you are any safer, particularly since a lot of the time it's up to you to make sure you're implementing it properly. I recommend incorporating some of the good parts into projects you might be working on.
If you're doing cryptographically relevant development, I recommend implementing some of those items, for two reasons. First, some of the requirements are actually good: they can improve security. Second, you never know; some day a federal government agency may want to use that product, and you'll have to implement them anyway if you want them to procure it. I have included a couple of links in the slides. The first two are to the NIST pages for the Cryptographic Module Validation Program and the Cryptographic Algorithm Validation Program. I didn't speak much about the CAVP, but one thing to note is that, if nothing else, they're good for testing your algorithm implementations. If you're ever in doubt about whether your implementation is sound, their page links to test vectors that can be used to verify it, and you can always go to a validation lab if you want official algorithm validation certificates. I also included a link to the page on the CMVP's website that has the FIPS 140-2 standard as it's currently drafted and used, and a link to the NIST page for the development version of FIPS 140-3. As I mentioned, the public draft is a little dated, but it's the best that's available right now for public consumption. So, I do have a little bit of time left to answer questions, and if there are any, we can start going over them now. If we run out of time, we can always move to the Q&A room. You can yell really loud; I'll repeat the question as long as I hear it. Right. So the question is about OpenSSL. They do have a validation for what they call a FIPS Object Module.
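Using published test vectors is straightforward: hash (or encrypt) a known input and compare against the expected output. Here is a minimal known-answer check using a SHA-256 vector from FIPS 180 (the digest of `"abc"`); CAVP response files supply many more vectors in the same spirit.

```python
import hashlib

# Known-answer test: hash a published input and compare the result to
# the published digest. This vector is the SHA-256("abc") example from
# FIPS 180; CAVP test-vector files follow the same idea at scale.
KAT_VECTORS = [
    (b"abc",
     "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
]


def run_kats() -> bool:
    """Return True only if every vector produces its expected digest."""
    return all(hashlib.sha256(msg).hexdigest() == expected
               for msg, expected in KAT_VECTORS)


assert run_kats()
```

If you were testing your own SHA-256 implementation, you would swap `hashlib.sha256` for your function; a single mismatched vector is enough to show the implementation is unsound.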
And, as you mentioned, it's not updated as frequently as the main OpenSSL codebase. What they've done is what a lot of labs considered a software-based validation, which is kind of contrary to what everyone had deemed acceptable in the past. Essentially, you can take the FIPS Object Module, build it according to the directions in their security policy, and compile it into a version of OpenSSL, or into your own application, and that would be acceptable and FIPS compliant. One key thing to note, and something I'll admit a little ignorance on, is whether the Object Module works under both the old 0.9.8 branches and the 1.0 releases; I do not know if it works with the newer OpenSSL or not. I do believe they're going through a new validation for a new version of the Object Module, and that may be for a version that's compatible with the newer releases. Yes. So the question was about the requirements in 140-3 and about password enforcement within the module, and whether that's a change from the industry standard of requiring those through policy rather than as part of the product. They are actually trying to push away from policy-driven items like that, in part because when a policy is enforced by the user, a lot of the time it isn't being implemented, or users aren't creating strong enough passwords.
So the thought process is that if you had a network appliance with an administrator password, the device itself would say, "you only gave me a six-character password," and reject it outright, instead of just letting that go and putting it on the end users to enforce. Right. One of the original requirements that got a little bit of flak was an early draft requirement that specifically called out preventing dictionary-based passwords. One of the concerns, obviously, was from the vendors: "well, we don't want to have to implement that." And from a testing perspective it's, "so we're supposed to attempt to enter a couple of hundred dictionary-word passwords and see whether we get rejected?" So there's definitely a scoping issue with testing that, and it's one of those ongoing issues with the validation process. That's it. Okay, we're out of time. If you have any other questions, we'll be in the Q&A room; I'll be there with you. It's Miranda 1, right back where we all came from.