So, thank you for coming to this session. This talk has a bit of history. Last year I was supposed to give roughly the same talk at the SambaXP conference, and I lost my voice a day before the talk. Or rather, it did not happen because I lost my voice. Luckily it was a remote conference, so I didn't need to travel somewhere without a voice. Fast forward: this year we got a repeating set of the problems that prompted me to give this talk again at SambaXP. This time it was in May and in person, so I actually went there and gave the talk. And this DEF CON talk is a variation of that talk, because we got some news and some fixes in the meantime. So that's great, you get new stuff we didn't even have a month ago.

So, who are we? We are both from Red Hat, and we work on Red Hat Enterprise Linux, specifically on Red Hat Identity Management, which includes all the things you typically associate with enterprise environments. No, we are not the guys who fix your printer, even though we do know how to fix printers. But we are the guys who fix your Active Directory compatibility, Kerberos and all related stuff. Julian maintains MIT Kerberos at Red Hat. I'm doing a bunch of stuff related to FreeIPA mostly these days, but also Samba, alongside the actual Samba maintainers at Red Hat. The most important part here is that we try to represent our experience as application developers, as opposed to crypto module developers and the regulatory bodies that establish the requirements and standards.

So this talk is about our journey in the world of FIPS 140. This is just a reminder; I'm sure most of the people in this room know what FIPS 140 is. It has been almost 30 years now since the introduction of this set of specifications, and the previous standard, FIPS 140-2, was accepted in 2001. Fast forward to 2022 and we have FIPS 140-3 as the set of requirements for all crypto in computing systems used in government environments. But it's not only government environments; it's also everyone who wants somewhat stricter security requirements for their systems.

If we look at Red Hat Enterprise Linux 8, this was the state in RHEL with FIPS 140-2: there were only 5 crypto modules certified, and a giant amount of work was done to reroute all the other applications from implementing their own crypto to using the modules that were certified. When the modules are certified and applications don't implement their own crypto but use the certified crypto, they can work in a FIPS environment and be compliant with at least some of these regulations. The actual reality is much more complex. You have to take the business requirements into account in addition to the technical limitations and the actual regulatory mandates. So in many cases you might have something that deviates from the actual requirements, and you have to balance between the business side and the technical side. We as a vendor in many cases have to provide means to enable or disable things, while we aim at providing strict FIPS compatibility in the defaults.

And this is the current state of FIPS 140-3 in real life. You will see that there is the Cryptographic Module Validation Program, and there are implementations under test. If you go to that link you will get a huge list of modules from different vendors, a lot of different vendors. Red Hat again has these five submissions. In this talk we mostly focus on our experiences with the OpenSSL FIPS provider as submitted by Red Hat.
But you can see, or notice, that it is still at the implementation-under-test step. And the 140-3 edition brings a lot of changes. From our perspective there's a lot of deprecated functionality you cannot use. Sometimes these changes apply, or are announced, only after you have already submitted the module for certification. For example, FIPS 186-5 removes DSA completely. It's not there anymore. It was published in February, half a year after the module was submitted. FIPS 180-4 is being revised; it's still not fully revised to remove SHA-1, and the announcement for that also came in February, I think. In general, some initial work is being done by NIST and other bodies to enable these standards to actually be implemented, by introducing RFCs, changes and so on, like for TLS ciphers. But this only applies to the major protocols. If you're working with protocols that are out of scope for these regulatory bodies, you get an interesting situation. For example with Kerberos, the encryption types were updated based on an NSA proposal, but all the other related RFCs around Kerberos weren't updated. So we get some conflicts there. And again I should remind you that compliance is literally what a customer running a system works out with their FIPS auditor and the vendors involved. It's not a technical state known in advance.

If we look at the SHA-1 transition announcement, we can see that the requirement is to get rid of SHA-1 by the end of 2030. But the reality is that the guidance is given right now. So if you have a certified module... well, there are none at the moment, at least of the ones we care about; they are all implementations under test. And the laboratories basically say these guidelines apply now, not in 2030. You have time to prepare, but if we certify modules we have to do it now. Which means the laboratories all have somewhat different feelings about this guidance. And the most important part for us is: okay, SHA-1 is not allowed. That affects a lot of the crypto, specifically in the Kerberos case. Crypto modules cannot instantiate non-well-known curves whenever that is needed. Certain APIs might be asked to be removed completely. The whole certification process is literally: throw a module in for investigation, get feedback, and repeat multiple times. Sometimes it takes months, sometimes days to get the feedback. And it's always interesting to find the problems literally right before your product has to be released, and then you have to do some firefighting with all of this.

So the reality that we have now is that if you take the strict understanding of 140-3, then you cannot interoperate with Active Directory, period. This is not possible, because there are no overlapping cryptographic primitives that could be used at all. Active Directory only supports the Kerberos ciphers from RFC 3962, which use a Kerberos key derivation function that is not allowed anymore. And on the other side, they all use SHA-1, which is being asked not to be allowed anymore. Okay, you could still use it to verify legacy signatures, but we cannot really put this into the bucket of legacy signatures, because they were not generated years ago; these are signatures generated right now, as part of connection establishment, for example. So strictly speaking you cannot apply any exceptions here. Okay, so the game is over, I can end this talk and you can spend the rest of Sunday somewhere else. Well, then we come to the interesting story that happened two months ago.
So Microsoft actually submitted their own implementations under test. There's a bunch of crypto modules they have to do. And obviously nothing in this list says anything about Active Directory, because these are, like in our case, the actual crypto modules that the Kerberos part of Active Directory will use to implement its thing. The Kerberos part is basically an application on top of this. But they don't have FIPS 140-3 certified crypto modules yet, and neither do we. So we are all working in preparation. In addition to that, on the same day, by the way (if you notice, it's April 14th for the majority of these), the leading Kerberos developer at Microsoft wrote a blog post with quite interesting content, which boils down to: hey, we need to rewrite the entire crypto stack in the Kerberos implementation in Windows. And he finally mentions RFC 8009, which defines the Kerberos encryption types allowed in FIPS 140-3. So great, there's finally something that gives us a bit of hope. From this perspective we can at least expect that our future will be bright, eventually. I hope; there's too much darkness right now.

But of course we live in the present, and back in the present we have the funny question of how all of this is enforced. You have crypto modules, which in our case are libraries that other applications link against. These are pretty complex libraries themselves; the APIs they provide have certain semantics and so on. And you can configure these libraries to apply certain things. How do we manage to make that coherent and consistent with what is supposed to be expressed by the regulatory bodies? The easiest answer is that we try to isolate it all in a system-wide configuration. This is not a new topic; it has existed in Red Hat Enterprise Linux and Fedora and other downstream distributions for quite a while. The crypto-policies project effectively defines a nice set of rules that allows you to generate a bunch of configuration files for these crypto libraries, so that they all apply a consistent set of rules. It also allows distributions to have their own consistent sets of rules without those necessarily being the same. For example, what DEFAULT or FIPS means in RHEL is not necessarily the same as DEFAULT in Fedora. That's because Fedora has community requirements and RHEL has business requirements that don't always align. That's fine; that's what these policies are for. There's a bunch of them and they are already used in multiple places. This is just an example of how the test outputs look. These test outputs are effectively the generated configurations already. I'm not showing the original configuration, I'm showing what is generated and then loaded into the applications when the library is initialized.

These policies have a way to tune them. You can have a main policy and then add or remove certain things within the context of that policy. The names here are just names; behind them there are small snippets of configuration called sub-policies. For example, the AD-SUPPORT sub-policy in RHEL 9 means enabling the encryption types that Active Directory understands. In RHEL 8 it also means enabling all the encryption types that Active Directory understands, but in RHEL 9 it doesn't contain the RC4 ciphers, for example. NO-SHA1 relates to what Fedora ships: by default the configuration in Fedora enables SHA-1, and you can add the NO-SHA1 sub-policy to disable that.
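Just to give a rough idea of what that looks like in practice — a minimal sketch, assuming a RHEL 9 style system with crypto-policies installed; the exact paths and generated contents may differ between releases:

    # switch the system policy to FIPS plus the Active Directory interoperability sub-policy
    update-crypto-policies --set FIPS:AD-SUPPORT
    update-crypto-policies --show        # e.g. prints: FIPS:AD-SUPPORT

    # the generated Kerberos back-end (roughly /etc/crypto-policies/back-ends/krb5.config)
    # then carries the permitted encryption types; with plain FIPS that is only the
    # RFC 8009 pair, for example:
    #   [libdefaults]
    #   permitted_enctypes = aes256-cts-hmac-sha384-192 aes128-cts-hmac-sha256-128
    # and AD-SUPPORT adds the RFC 3962 (SHA-1 based) AES types on top of that.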
This NO-SHA1 sub-policy would make no difference in RHEL 9 because it's already no-SHA-1. And you can combine them; you can apply multiple of those together. What applying multiple sub-policies means is really up to the business, up to the customer and, if this is a FIPS environment, their FIPS auditor to interpret and analyze. We help them by providing the means, but we cannot really say they are compliant in the end; we just provide means to be compliant.

Just one example. When you get the DEFAULT crypto policy on RHEL 9, you have a Kerberos configuration with some permitted encryption types. These include the encryption types from RFC 3962, which means the HMAC-SHA1-based ones, and also the ones from RFC 8009, the ones based on SHA-2. If you take the FIPS policy, you get only the two encryption types from RFC 8009. You define that a well-behaved application will only see these encryption types, only request them, and therefore only operate on these encryption types. This is what the system-wide level provides. The applications still need to do something to operate in this environment, because if you get permitted encryption types that you don't understand, from your perspective there's nothing to work with. So the application needs to do something to live there, right?

How this works on the application side is that it's roughly, in most cases, transparent. You initialize a crypto library, the crypto library loads these configurations, loads all the providers or whatever, and applies these defaults. The application might change some settings or explicitly request certain things, but if the crypto module does not have an implementation for a particular crypto primitive, then nothing can be done, of course. So in many cases things are transparent, but failure is also sort of transparent. There is a new thing in FIPS 140-3, for example: the indicator API. It's sort of a requirement that crypto modules have to implement. Not all of them have implemented it yet. I know that NSS has merged this API; for OpenSSL the discussion is still open on how it should look and how it would be used. There are always concerns that existing applications might fail simply because they're not prepared to query this additional information. It's easy if it's implicit: you called something, something failed, you bailed out early, nothing works, that's fine — fine in the sense that you see the problem early enough; you cannot log in over SSH and that's all you get. But for the explicit one, the application needs to be modified, it needs to have knowledge of the API to call, and if there's no API yet, then of course that's a different question.

In general we can split all of this modernization that happens at the application level into three broad categories. One of them is not really FIPS related: it's a genuine modernization of how applications handle their crypto operations. It's what we call algorithm agility. It's the same thing you need to be prepared for post-quantum, same as for FIPS. Basically you have to be prepared that the actual algorithms might change, and be prepared to negotiate these changes between client and server in a way that doesn't break you when the algorithms change. Ideally, as long as your libraries, your crypto modules, provide the primitives and you are able to discover, negotiate and operate on them, you should be good.
The truth is that that's not always where we are. The other part is defaults. It's nice to have self-adjusting defaults. Crypto-policies is one possible mechanism to give you self-adjusting defaults: if the policy profile changes, the application automatically adjusts to not accept what's not accepted in the policy. But then we get to the trouble of how to migrate data that was created before the policy change happened. And all three of those broad categories have their nightmares, they have their happy days, and they probably all have a business that failed to operate, and a lot of pressure from the people who upgraded and for whom nothing works anymore. So as usual, planning is the key, and planning in advance is the key as well. I just want you to remember that FIPS 140-2 was accepted 21, 22 years ago. We have happily forgotten how all that work happened. And we — as a computing society, IT, technology companies, open source communities and so on — do have work to do. And we forgot how we did that work in the past, because we changed generations of developers, we changed technologies and so on. Sometimes it's too late to come to this, hence the urgency to fix things. Sometimes we probably have some time, say until December 31, 2030, to get rid of SHA-1 in all the remaining obscure corners of our software. But of course what we're doing now is basically fixing things in broad strokes: whatever we can cover, we cover, but then there are still things missing, or things discovered only when you actually have to do that work.

With application modernization and algorithm agility, the main problem is not changing the code. The main problem is agreeing on where we are going and how we will be changing. In most cases these things start with protocols. Protocols are often defined by some consensus across the industry, across different bodies, the IETF and W3C and all the others. Changing standards is a slow process, especially in the crypto area. Updating RFCs to remove certain things takes a long time. Even if everybody agrees that certain things need to move on, it still takes time. So this is part of what you need to do: not just changing the code, you also have to update the specifications and work on them. And sadly, in many cases this part is forgotten, maybe in the hope that somebody else does it. It's not my job, right? That's a common problem; we always forget about that. We as a community of professionals have to do this.

The other problem is that, sure, adjusting the implementation takes a lot of time. For any submission in this kind of crypto area, you have to do a lot of analysis of the code, just to not let errors and security problems slip in as part of it. Then you have to adjust it in such a way that old deployments can at least be migrated to new ones, or at least interoperate, maybe with backports to the old code base and so on. The blog by Steve Syfuhs also had a small note in it, when somebody asked whether Microsoft is working on these changes such that they will be backported to Windows Server 2012, 2016 and 2022, or whether it will be purely what they call vNext, the next major version. The answer was: we work for the next major version; how we do the backports is unclear — "we" meaning those developers. And of course new deployments like RHEL 9 have to accept existing working systems that don't have these new implementations. So that's the set of challenges we deal with, or have to deal with.
And there is one example we want to show you today: how we handled it so far with the Kerberos PKINIT protocol extension. This is effectively using smart cards to authenticate over Kerberos. So Julian, please take over.

Yes — PKINIT is basically an extension of the Kerberos protocol for certificate authentication, and it's indeed a good example of the kind of trouble you might run into when you're trying to comply with a new FIPS restriction. It's basically an issue of algorithm agility, especially in two areas: the signature types, meaning the hash functions used in signatures, and the parameter groups used for the Diffie-Hellman key exchange. There are three groups standardized in the RFC — group 2, group 14 and group 16 — the first two being mandatory to implement. And even when you have things standardized in an RFC that would be sufficient to comply with a FIPS restriction, it may actually not be enough, because there are sometimes areas in the RFC that are ambiguous in the way they are defined. That tends to create different understandings of the RFC depending on the vendor implementing these libraries. This has typically been the case for PKINIT with algorithm agility for signatures: the RFC was understood differently by the OpenSSL-based implementation and by the Heimdal one, which resulted in some cases where interoperability was simply not possible.

And even when there are RFCs available that provide a way to comply with a FIPS requirement, the issue is that the implementations don't necessarily implement all of them, because usually there's one RFC and then newer RFCs on top of it that add new mechanisms. This is an issue we face in Kerberos, because we have the MIT implementation, the Heimdal one, and the Active Directory one, which also has its whole set of specifications that are different from the RFC ones. So we basically hit all kinds of issues because of that: the fact, for example, that algorithm agility is not fully implemented for signatures in MIT Kerberos, still, in PKINIT; that there is no support for ECC certificates at the moment; and that on the AD side we are still limited to SHA-1 signatures for PKINIT, because that's all they support for RSA keys in the Diffie-Hellman key exchange process.

And something else we are kind of concerned about is that we currently have two slightly different implementations of the OpenSSL FIPS provider, the upstream one and the downstream RHEL one, and we suspect we might eventually have other implementations from different distributions. This is something we are worried about, because we might end up in a situation where it's difficult for a system administrator to figure out the proper configuration they should apply for their environment, because there might be different recommendations in case the certification labs don't come up with the same requirements — basically, if they have different understandings of the FIPS standard.
Yeah, and the interesting part here is that if you look into the implementations-under-test crypto modules list, you will find that every single RHEL downstream rebuild, like Rocky and Alma, and every other Linux distribution — from Canonical, from SUSE — they all have submitted their own crypto modules for this certification. Which means that even if they have a similar code base, I'm sure they will have differences, because they may use different labs and they have different states of the patches they apply. Hopefully they actually check each other, whatever is available, and try to synchronize. But in my experience there's also a tendency not to look at the application-level problems until you get the actual things certified, and then people discover these problems.

So I'll go through some more practical examples of what has gone wrong in the process of complying with the FIPS restrictions. We ran into a few issues. One of them was the support for the well-known groups for the Diffie-Hellman key exchange, which is part of the PKINIT process. The thing is, group 2 is not actually supported by the OpenSSL library, because it's considered too weak crypto, but it's still used as the default by Heimdal for PKINIT. So this has caused some interoperability issues that we basically fix right now by recommending a configuration change on the Heimdal side. There is also, of course, the SHA-1 signature issue. At the moment the Windows implementation of Kerberos does not support any newer SHA version for the signatures used as part of the Diffie-Hellman key exchange. We hope we might be able to fix that by supporting elliptic curve cryptography for the Diffie-Hellman exchange in the future; it's currently being worked on upstream. And this is an issue we also have with the older implementations of Kerberos on older versions of RHEL, because, as I mentioned earlier, there is still no proper algorithm agility in MIT Kerberos for signatures, so it's basically hardcoded in the code. We upgraded to SHA-2 signatures, but older versions will still produce SHA-1 signatures, so you have to be able to verify those in case you still have such hosts in your environment.

A few words about how signature algorithm agility is supposed to be implemented as part of PKINIT. There is this supportedCMSTypes attribute, which is supposed to provide a list of all the signature algorithms that are supported by a given agent. The issue with MIT Kerberos right now is that it will always generate a SHA-2 signature; it will advertise the fact that it understands SHA-2 signatures, but it will not take the supportedCMSTypes attribute from other agents into account. And here is basically an explanation of how we deal with the fact that we have to integrate some exceptions into the FIPS crypto policy for Kerberos, in case you still want to achieve interoperability with Active Directory until SHA-2 is actually implemented by Microsoft. The issue is that these algorithms, like SHA-1 and others, are not part of the FIPS provider, which is the one available by default when you use OpenSSL in FIPS mode. So the approach is simply to use a library context and load into it the provider that makes the algorithm you need available. This is in fact bypassing the limitation of the FIPS provider, but it is still controlled by the crypto library. That's what these sub-policies like AD-SUPPORT or AD-SUPPORT-LEGACY are for: they modify the Kerberos configuration and allow this bypass method to be used to access these algorithms.
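Roughly, that library-context dance looks like the following — a minimal sketch assuming OpenSSL 3.x; this is an illustration, not the actual MIT Kerberos or FreeIPA code:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/provider.h>

    /* Fetch SHA-1 from a separate library context even when the process-wide
     * (FIPS) configuration refuses to provide it. */
    int main(void)
    {
        OSSL_LIB_CTX *ctx = OSSL_LIB_CTX_new();          /* private context, not the global one */
        OSSL_PROVIDER *prov = OSSL_PROVIDER_load(ctx, "default");
        EVP_MD *sha1 = EVP_MD_fetch(ctx, "SHA1", NULL);  /* would fail against the FIPS provider */
        unsigned char digest[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        const char *msg = "not a security-relevant use of SHA-1";

        if (prov == NULL || sha1 == NULL)
            return 1;

        EVP_Digest(msg, strlen(msg), digest, &len, sha1, NULL);
        printf("SHA-1 digest length: %u bytes\n", len);

        EVP_MD_free(sha1);
        OSSL_PROVIDER_unload(prov);
        OSSL_LIB_CTX_free(ctx);
        return 0;
    }

The point is that the bypass stays explicit and localized: only the component that legitimately needs SHA-1, for Active Directory compatibility or other non-security-relevant uses, loads the extra provider, and everything else keeps going through the FIPS provider.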
So, a few words about the upcoming version of Kerberos, 1.21 — it's actually already released. It adds a few things; upstream is actually quite cooperative, not in FIPS support as such, but at least in accepting some modifications to the codebase that make our life easier in the process of complying with it. For example, they allowed us to make the import of some well-known groups optional, typically group 2, which is not provided by OpenSSL. That had been causing some issues, because the PKINIT module would just fail to load if it was not possible to load this group, so they accepted this kind of modification to let the plugin keep working regardless. So now Alexander will give some details about encryption type compatibility between Active Directory and MIT Kerberos.

So far this was just PKINIT, which is admittedly one of the complex parts of this thing, but there's another one specific to Active Directory. Active Directory issues Kerberos tickets that carry additional information. That information is really crucial for having an environment where your Kerberos identity is tied to your system identity. In the past two years there were a number of issues, security problems, that specifically caused break-ins in environments where you have Active Directory and Linux systems working with each other. Features of Active Directory allow, for example, each user to enroll machines, additional machines, up to a quota of 10 machines or so. So some users found out that if they name a machine like a Unix user, then certain behaviors within Active Directory allow them to create a machine account with, let's say, the name root, and then use the identity of this machine to log in as root on any other Linux machine enrolled in Active Directory. To fix this kind of problem, Microsoft added additional checksums over certain fields in the Kerberos packet and fixed some parts on their side. The Samba team, Microsoft, Red Hat and a few other vendors worked together on fixing this for more than a year and released it, I think, in November 2021. Then after that, some researchers found out that one of these checksums, a different checksum, can be attacked with a preimage attack. And that preimage attack had existed there for 15 years; nobody noticed it when the specification was written. So Microsoft came again with a new release in November last year, introducing yet another checksum.

While working on fixing some of these things, we found out that, hey, on the FreeIPA side we switched to using the SHA-2 encryption types, and that means these checksums will be done with the SHA-2-based algorithms. And suddenly Microsoft servers started to reject these tickets. We reported it to Microsoft, and apparently they don't check the checksum itself, they check the size of the buffer. If that size of the buffer is different from what they expect for a SHA-1 checksum, it's a failure for them. When we reported it, they were already working on the introduction of the new checksum and some other semantic changes. After they released those fixes, we found out that they had silently fixed this problem as well. So now we can apply this checksum, but at that time we had already added functionality, which is part of the 1.21 release, to allow the KDC to hint that, hey, when you're talking to Active Directory, you can change the encryption type to a different one, just for that exchange, so that you can stay interoperable.
This will not be needed once they finally support RFC 8009, because then we can use the SHA-2-based encryption types. But for the older systems we still have to keep this. Maybe those older systems will not live beyond November of this year, when Microsoft announced they want everyone to switch to the new builds, which introduce this new checksum that prevents the preimage attack. But these are the things we have to deal with. This is not strictly FIPS, but it's driven by our investigation of how to make it all work in FIPS mode.

And then, moving forward — we have already worked for something like five years with Microsoft on this. They have plenty of different remote procedure calls implementing all kinds of operations: creating a user, setting a password on the user, replacing certain things, mutual authentication between machines and domain controllers, and so on. Most of that was still using RC4 in the operations. Some of these operations changed semantics — well, they introduced new versions of those operations, basically requiring an AES-based session key, and then using that AES-based session to operate with whatever is in there. Which is, I think, the type of activity that is allowed under FIPS, because you have a secure channel using approved cryptography, and what's inside this channel is considered plaintext as far as the operation is concerned. But still, for some of them we need access to RC4 just to perform some of those plaintext operations. And we also don't want to leak key material, the passwords and so on, when creating them. So we had to refactor a bit and change the code in such a way that it generates cryptographically strong passwords, for example for trusts between different AD domains, and never leaks them to the application. The library handles it and ensures that whatever is handled is always handled inside an AES-based session. It's still not enough for FIPS 140-3, but we really hope that the work they are doing refactoring their Kerberos crypto stack will result in more changes. And of course, on our side we cannot do more until they publish those specs and actually do that work. So we are still in the process of doing that.

Now to the defaults. New installations are kind of easy: new installations basically just take the defaults. For example, FreeIPA changed the default installation to use the new encryption types for the Kerberos master key. Works fine. If you kept the same encryption types as before, the default policy would block them, so the KDC simply would not be allowed to use them. But for a new installation you don't have any old keys at all; users get created, machines get enrolled, and new material gets generated. And for the migration environments, if users had old keys and old passwords, changing the password will use the new scheme; it automatically upgrades the hashes. But there are a few things that kind of fell through the cracks here. For example, FreeIPA supports one-time passwords, using HOTP or TOTP tokens, even software tokens. And the standard still says SHA-1 is there. Using SHA-1 in this context is okay, because it's not a cryptographic operation that matters here. But you cannot load SHA-1 from the FIPS provider, so you cannot really use it. You need to do this dance of loading another provider in a different library context to be able to operate on it. And even then it's not available if the default policy does not allow you to use it. So the worst part here is not that we cannot fix our side — we can.
It's that the other software will not work. Google Authenticator only understands SHA-1-based tokens. If you switch to different tokens — and people are switching to tokens using SHA-256 or SHA-512 and so on — they cannot be used with Google Authenticator. And nowadays almost everybody else implements a wider spectrum, because it's easy to support the different SHA hash functions there. These kinds of things only show up in field testing.

Finally, data migration. If you have old systems and you add new systems, you're supposed to have business continuity: the data replicated from the old systems continues to be useful on the new ones. On RHEL 8, if you have a system installed in FIPS mode with IPA — we don't ship Samba AD, so I'm using the IPA example here — and you add RHEL 9 in FIPS 140-3, in general you shouldn't expect things to work, and indeed they kind of don't, right? Because what was allowed in 140-2 is not allowed anymore in 140-3. In the case of Samba, which is more relevant to other distributions, Fedora and Debian and so on, they actually have the actual plaintext of the password, encrypted, in the database. Samba uses GPG to wrap all these blobs and so on, so they are always stored encrypted, but Samba has access to them. So administrators can regenerate keys for the users using new material. This all becomes properly doable once Microsoft introduces support for those new encryption types, because they have to be compatible with Windows machines, and Windows machines do not support them yet. But in this imaginary case, yes, you can do all of this, because you have all the material. For the domain members, the machines that are enrolled, they actually have their plaintext credentials, encrypted, on the machine, so they can regenerate keys and mutually authenticate with the domain controllers.

In the FreeIPA case it's more interesting, because first of all we don't have plaintext passwords anywhere. On the Kerberos level we have Kerberos encryption keys, and for the LDAP passwords we actually have hashes. So there's nothing to regenerate from. Of course, there is a mechanism to transparently upgrade on LDAP bind: during an LDAP bind you can take the plaintext password, apply the old scheme, see that the hashes are actually compatible, compare them and so on, and then re-encrypt with the new scheme. This is what we're already doing in the plain Red Hat Directory Server case, so it's possible. But the problem here is that FreeIPA supports more than passwords. FreeIPA supports non-password-based authentication, which you cannot do over LDAP. So token-based OTP, HOTP, is supported, but we are working on... Well, smart cards are not supported that way; they're supported only through Kerberos, so you have to solve it there. But luckily, a smart card means you don't have a password: you have a cryptographic device and possibly a PIN, and the PIN applies to the device, not to something stored on the server. So this problem doesn't exist there at all. And we are working on introducing FIDO2 support, so WebAuthn-style authentication, and that means these kinds of keys will also be supported. They also don't have passwords, so that's the best way. Maybe at this point you just start thinking about migrating all of your users to non-password-based authentication and solving it that way. But of course, to migrate the old system, or to apply these changes and keep the old system working, you have to extend the policies. And this is where sub-policies become really crucial.
You can extend the policy with a sub-policy, do the migration — because this allows the old keys to be accepted and so on — migrate everything, change the policy back, restart the server, and you are now in the new world. Of course, that assumes FIPS from scratch on the old systems and FIPS from scratch on the new systems. But it at least gives you a path forward. On the client side it's also a bit easier, because you have host keys, you can rotate them, you can automate things. Again, the same system-wide crypto-policies apply there consistently. Well, if you are using distributions where you don't have crypto-policies, then my suggestion is to work with the maintainers of those distributions and really raise your needs with them, as developers and also as users. Because frankly, to me this is one of the best not-yet-upstreamed extensions that the Red Hat crypto team has made. And then the other thing is that you can switch to certificates. FreeIPA supports enrollment using certificates now, so instead of one-time passwords or admin passwords and so on, you can use PKINIT-based enrollment of the systems: re-enroll the system and you get everything in place. And you can use this also to rotate the keys. It's a pretty flexible way of achieving this; you can get quite far with it. So, that's literally all we have. Of course, there's a lot of work to do, and the focus here was more on shining a bit of light into that dark forest. Any questions?

So, the question is: has there been any discussion about FIPS requirements with Microsoft — yeah, or the other industry parties — and has this discussion been public or private? Yes and no. We try to discuss it in many places. For example, for Kerberos there is the Kitten working group at the IETF that handles the protocol evolution and such things. Microsoft came forward recently there with some of their concerns, and they were discussing — not related to FIPS directly, but you can deduce from the discussion that they are concerned about this migration. A major concern for Microsoft is that they still have to support configurations where they don't use Kerberos, where NTLM is the fallback, and they cannot easily disable that fallback, so they want to design something new there. There are some discussions that we have — in particular Julian — in the OpenSSL upstream, the Heimdal upstream, and the MIT Kerberos upstream. This is all in public, in the pull requests or issues on the project sites. Julian also works on the updates to the RFCs for these MODP group requirements; hopefully we will get this moving forward.

So, Clemens adds a comment that there is the Cryptographic Module User Forum — I think it's on GitHub, right? It's on some page. This is the place where NIST and the labs and all the other module implementers collaborate. It's a public one. If nobody has died, it will stay in this situation forever. Thanks. Exactly. Dmitry makes a comment that disagreements on the interpretation of RFCs should ultimately end up in the working groups, producing fixes and modifications to those RFCs so that they are actually readable. I fully support that, and I should refer to my slide where I wrote that this takes years, and we probably should have started yesterday with this clarification work. I also should clarify why we care about Heimdal while we do not ship Heimdal. For us, Heimdal represents a client: macOS uses Heimdal as its Kerberos implementation.
And if your macOS systems don't work against your server, then you get calls from customers, right?

How agile is it? Because, compared to the next thing we're going to be dealing with, this will seem like a walk in the park: post-quantum. But fortunately Kerberos only cares about PKINIT for post-quantum — you don't have to do signatures, right, in the main part of the protocol? Yes. Can you answer? So the question is how agile the agility in the PKINIT RFCs actually is. The answer is: not much. They establish some level of flexibility among the explicitly specified algorithms, not new ones. So some work on extending them will be needed. In the context of post-quantum, this also means that some work will be needed on actually defining what certificates mean in terms of post-quantum crypto. And that is quite unclear at this moment, because the whole story of what the certificate chains would be, what format and what not, is all unclear. I guess all the work on specifying that is also waiting for more definitive answers from the regulatory bodies. But they're telling everybody now because you're going to flood something in it. Yes, and a comment from Bob is that the general guidance is to extend the RFCs as much as possible to accept new algorithms and whatever is there, even though the exact algorithms for signatures or certificates are not fully set in stone yet. I fully agree with that.

Just adding something: there are some mechanisms in place for agility. The main issue in the PKINIT case is that there are actually three RFCs: the original one, which includes support for RSA with SHA-1 signatures; a second one, which added new hash functions, basically the SHA-2 functions; and there is also the elliptic curve Diffie-Hellman support for PKINIT. The thing is that Microsoft basically skipped the implementation of the second one, the one adding new SHA-2 versions for RSA-based signatures, and moved right to the ECDH one. This is why we are in the situation right now where in MIT Kerberos we only have support for the RSA-based signatures, so the only thing we have in common with Microsoft at this point is RSA with SHA-1. That's what this whole issue is about. But we are currently working downstream on supporting the ECDH RFC, so hopefully we can fix this issue relatively soon.

Okay, other questions? Yeah — so you mentioned on one of the slides that TOTP and HOTP were using SHA-1 and for that reason you couldn't use them anymore. A small correction on that: that's not entirely correct. You can use SHA-1 for hashing, just not for hashing that's being used in signatures, which is not the case in TOTP as far as I know. But that's, you know, a timed thing: by 2030 SHA-1 will be completely... Yeah, I hope we get there easily. Yes — the correction was that, strictly speaking, FIPS 140-3 allows using SHA-1 in the way TOTP and HOTP use it. I think our main trouble is that we do not have access to SHA-1 through the EVP API. It should still work for hashing even if it doesn't work for signatures — okay, so it should be fine, but it doesn't work right now. So this is a bug. Yeah, yeah. But that's what we see: customers come back with brave new deployments and they finally find out that something is not working, okay, start investigating, and it boils down to a certain environment. In this case it wasn't even FIPS, it was SHA-1 disabled by default in RHEL — yeah, RHEL 9. The bug I submitted, in fact — that's a different one. There's a bunch of them... In one mode it works, and in a different mode it doesn't. Okay.
Yeah, the comment was that there was just a bug submitted on Friday around a similar topic, where there is a difference between FIPS and non-FIPS — but there FIPS works and non-FIPS doesn't. So, yeah, as you can see, this is really a fluid and agile area, with problems coming and going in a matter of days. Overall, I think we are almost done — two minutes to the end. Any more questions? Yep. I have a question: when is it rejected? So, the question is about the indicators I mentioned for OpenSSL. I actually mentioned that indicators are not fully implemented in upstream OpenSSL; the pull request is still open, but the implementation of indicators is a requirement of FIPS 140-3, so I expect them to appear in some form. Regardless — and the question is which one we prefer, implicit or explicit. It really depends on the situation; I cannot answer in advance. In the case of Kerberos, we've been relying on the implicit ones: if we get a rejection, we try another path, by loading a separate context and a separate provider, in the cases where we know this is the problem. For the explicit one, I would need to find a specific case where it would be absolutely required. But I know there are cases like, I think, SSH server negotiation — but I'm not the person to go into details on that. And we're done.