Okay. Hello. My name is Reinhard Bündgen. I'm the first of three speakers, and I have to give some credit to my colleagues, Holger and Ingo, who did the actual development behind what I'm talking about today. Just a show of hands: who understands the title? Actually, few. Great. Because it's a complete shift of gears: we are now moving into the field of cryptography and protocols, and it's a mix of technologies that probably not everybody knows about. I'll make sure that you understand by the end of the presentation, even if you didn't raise your hand.

So, what's the challenge that we want to resolve? It's the challenge of identity theft in a network connection; in particular TLS, and more particularly TLS implemented with OpenSSL 3.0. What can happen? Well, a TLS server typically signs its data, its keys, and so on, in order to prove who it is. But that signing is done with a key, and that key is very valuable. If someone manages to steal that key, they can steal the identity and pretend to be the right server. That would be really nasty if the server is serving a bank account, for example. So that is what we are talking about.

Against key theft there is a technology to defend with. This technology is called HSMs, hardware security modules. Most of these hardware security modules can be programmed against using a standard, and that standard is called PKCS#11.

Okay. Again, the picture here: the client wants to connect to a server, and the server identifies itself with a private key. If that private key gets stolen and the traffic gets rerouted, you might talk to the wrong server. Clearly, if you're doing something like mTLS, with mutual authentication, the same problem exists for the client.

Okay. What is a hardware security module? Who doesn't know what a hardware security module is? I'm surprised.
No credit card whatsoever. It's a device that does cryptographic operations on protected secrets — sorry, my voice is fading — in a way that protects the keys. These are typically crypto cards, but also small devices like smart cards. Good HSMs will even rather destroy themselves, or all the data hidden inside, than let that data be extracted. They are very often certified against the FIPS 140 standard at levels greater than two; software is typically only certified at level one.

And how does this work? This hardware contains secrets. The secret can be a single master key that is used to encrypt other keys. In that case the encrypted, wrapped key is available to the operating system, and the software has to send both the encrypted key and the data to be worked on to the HSM. Or the operational keys can be stored in the HSM itself; then you have, of course, only limited space for such operational keys, and the software only sends a handle to that key — an index or whatever — to the HSM together with the data to be operated on.

A little bit of PKCS#11 terminology. PKCS#11 is the most popular standard to work with HSMs. Clearly, HSM vendors very often also have their proprietary interfaces, but PKCS#11 looks like a nice common denominator for many vendors, and if you have something proprietary, you might even have a wrapper, a PKCS#11 wrapper, to talk to your HSM.

In PKCS#11, all key objects come with attributes. The most important ones are the sensitive attribute and the extractable attribute. Sensitive means you cannot read the key value if the attribute is true. If extractable is false, there is no way to get that value out of the HSM at all. Note that extractable is not the same as exportable in the clear: getting the value out of the HSM doesn't mean getting it in the clear, but wrapping it with some key in order to exchange it — for example, to let two HSMs communicate with each other. Okay.
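The master-key model described above can be sketched in a few lines of Python. Here the "HSM" holds only a single AES master key; the application keeps an opaque wrapped blob and must hand it back, together with the data, for every operation. All names are illustrative — a real HSM does this in hardware behind PKCS#11. The sketch uses the `cryptography` package.

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyHsm:
    """Illustrative model of an HSM with a single master (wrapping) key."""
    def __init__(self):
        self._master = os.urandom(32)            # never leaves the "HSM"

    def generate_wrapped_key(self):
        """Create an AES-128 key and hand it out only in wrapped form."""
        key = os.urandom(16)
        return aes_key_wrap(self._master, key)   # RFC 3394 key wrap

    def encrypt(self, wrapped_key, nonce, data):
        """The caller sends the wrapped key plus the data to be worked on."""
        key = aes_key_unwrap(self._master, wrapped_key)  # unwrapped only inside
        return AESGCM(key).encrypt(nonce, data, None)

hsm = ToyHsm()
blob = hsm.generate_wrapped_key()   # safe to store on disk
nonce = os.urandom(12)
ct = hsm.encrypt(blob, nonce, b"hello")
```

The point of the sketch: the clear key value exists only inside `ToyHsm`; the software outside ever only sees `blob`.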
The operational functions are typically, of course, encrypt and decrypt, wrap and unwrap, derive, and of course sign and verify. Encrypt and decrypt are encryption operations on data. Wrap and unwrap are encryption operations on keys, and they are meant as export and import operations: wrap exports a key, and unwrap imports a wrapped key into the HSM. That is the idea of these two operations. Derive is, of course, something like ECDH or a KDF function.

Even though the standard allows many combinations, there is something about the derive function that HSM implementers typically do: they constrain it so that if the input keys from which another key shall be derived are sensitive, then the output key is sensitive too. If they allowed anything else, the goal of the HSM would just go away. So even though the standard would allow deriving a non-sensitive key from a sensitive private key (public keys are always non-sensitive), HSM vendors won't allow this in the hardware — they wouldn't get it certified.

Okay, just to be on the safe side, a short overview of a protocol like TLS. It starts with a handshake where the keys to be used in the subsequent communication — typically symmetric keys for bulk encryption — are exchanged, and the server signs its key material with its signing key. If you have mutual authentication, the client does the same, but most connections are not mutual TLS connections. So what happens is that in the upper part, the handshake part of the protocol, asymmetric (public/private key) cryptography is involved — typically two types: signing, and key derivation, be it an RSA key exchange or an ECDH key exchange.

Okay. Now, let's look at the different key types that are involved in such a key exchange. First of all, we have the private signing keys. These are long-lived keys.
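The derive constraint can be modeled in a couple of lines: the output of a derive operation inherits sensitivity from its input. This is an illustrative sketch, not a PKCS#11 API — the class and the XOR stand-in for ECDH are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class KeyObject:
    value: bytes      # hidden from the caller when sensitive is True
    sensitive: bool

def derive(base_key: KeyObject, peer_public: bytes) -> KeyObject:
    """Model of the typical HSM rule: a key derived from a sensitive key
    is itself sensitive, so it stays an in-HSM object."""
    # Stand-in for the actual ECDH computation:
    shared = bytes(a ^ b for a, b in zip(base_key.value, peer_public))
    return KeyObject(value=shared, sensitive=base_key.sensitive)

clear = derive(KeyObject(b"\x01" * 32, sensitive=False), b"\x02" * 32)
protected = derive(KeyObject(b"\x01" * 32, sensitive=True), b"\x02" * 32)
```

This rule is exactly what will bite us later: a derive on an HSM-resident (sensitive) key can never yield the plain-text shared secret that OpenSSL needs.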
If the location in memory where these keys reside can be read out — just remember Heartbleed, which was one such vulnerability — such a key could be extracted and stolen. And this key also must be stored on media somewhere, because after rebooting your server you don't want to suddenly get a completely new identity. So there must be a way to store this key securely.

Then we have private keys for key exchange. If you're using an RSA key exchange, that key is typically also long-lived; I think most TLS implementations use the same RSA key for signing and for the wrapping. For ECDH — well, TLS, in particular TLS 1.3, doesn't use plain ECDH or Diffie-Hellman, but the ephemeral variant of it, and it's strongly recommended to use that. These are short-lived keys: a new one is generated at the beginning of a connection and discarded at the end of the connection. Why so? Well, generating ECDH keys is fast and easy — it's just a random number plus a point multiplication — whereas RSA key generation is very heavy and time-consuming. That's why there is no ephemeral RSA, even though one could consider it.

And last but not least, we have the symmetric keys involved. These are the ones that have been derived, so they are as ephemeral as the ECDH keys. On the other hand, for those keys performance matters, because we might encrypt a lot of data. The handshake is only done once; so if you have lots of connections, the performance of the handshake may matter if the connections are short, but typically it's the performance of the symmetric keys that really matters. Why is performance an issue? Well, asymmetric crypto is slow, and if you go to an HSM, the HSM is slow too, because every operation involves an I/O to an external device. You would bog down your communication if you ran every single AES computation on an HSM.
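The cost difference mentioned above is easy to observe: an ephemeral EC key is one random scalar plus one point multiplication, while RSA key generation has to search for large primes. A small, hedged benchmark with the `cryptography` package (absolute numbers depend on your machine; only the ratio matters):

```python
import time
from cryptography.hazmat.primitives.asymmetric import ec, rsa

t0 = time.perf_counter()
ec_key = ec.generate_private_key(ec.SECP256R1())   # cheap: fine per connection
t1 = time.perf_counter()
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
t2 = time.perf_counter()

print(f"EC P-256 keygen:  {t1 - t0:.4f}s")
print(f"RSA-2048 keygen:  {t2 - t1:.4f}s")   # typically orders of magnitude slower
```

This is why ephemeral ECDH keys per connection are practical while ephemeral RSA never was.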
And therefore we have more or less two groups of keys: keys that I call high-risk keys, where I strongly recommend HSM protection, and keys that I consider low-risk, because the damage if those keys get lost is limited — keys are always valuable, but some are more valuable than others. The low-risk keys have a limited risk and limited impact, and I think it's perfectly fine to use them as clear-text keys and have them in your memory while your connection is open.

Okay. Now we want to combine OpenSSL and PKCS#11, and we have to look a little at how things match. With OpenSSL, symmetric keys are represented by byte strings that are as long as the key is strong. So if I have an AES-128 key, it's 128 bits long — the minimal mathematical representation of this key. For asymmetric keys it's a little different: there we have something like an opaque object that represents the key, which is typically structured and must be something more complex anyway. And there is a way to represent those asymmetric keys in a key store. As for PKCS#11, both symmetric and asymmetric keys are represented as objects, because they contain not only the key material but also the attributes — some of which we have looked at, like the sensitive or extractable attributes, but there are others that restrict what the key may be used for. You can define a key that is only used for wrapping, or a key that is only used for signing, and so on.

Okay. OpenSSL was not originally meant to use HSMs. It's a normal clear-key library — a good one; that's no criticism at all. It's the de-facto standard of crypto, if you so want; the most popular crypto library there currently is. But it was always prepared for plugins. Before OpenSSL 3.0 was released, the plugin mechanism was called an engine, and such an engine could provide a replacement for a function implementation. Since OpenSSL 3.0, that plugin format has changed, and the name has changed too.
We now have a provider, and a provider has a different goal. It implements more or less an abstract data type, an object, and it comes with various requirements: for this key data type it must implement export and import methods — these are not HSM export and import, but methods that allow objects to be exchanged between different providers — and it must implement all functions and methods that this key can be used for in OpenSSL. We will see that these constraints cause some problems in implementing a good PKCS#11 provider.

Another concept that we need is the so-called OpenSSL key store. It contains key objects that depend on an external storage format. For example, there is an RFC that defines a URI to refer to a PKCS#11 key, and so you can have a key store that contains keys represented by such a URI. The keys in such a key store can be interpreted in a provider-specific manner; for example, the PKCS#11 URIs can be transformed by the provider into a corresponding PKCS#11 key handle. The application using OpenSSL must know where to load the key from and then insert it into the key store.

Well, for symmetric keys, as I said, they must fit in byte strings of their key length. That works best for plain-text keys. It may work for encrypted keys, depending on the padding. Encrypting an AES-192 key, for example, is a problem, because it's not a multiple of the block size, padding will be required, and such a wrapped key will be at least 256 bits long. OpenSSL variables for symmetric keys therefore cannot refer to PKCS#11 secure keys.

Now, let's look at the different key types and how we can handle them. RSA keys can be referred to by the URI format that I mentioned. Key generation is not needed in the TLS protocol. That's good.
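The AES-192 problem can be demonstrated directly: a 24-byte key is not a multiple of the 16-byte AES block size, so CBC with PKCS#7 padding turns it into a 32-byte (256-bit) blob, which no longer fits OpenSSL's "byte string as long as the key is strong" model. A sketch with the `cryptography` package (the key-encryption key here is just a random stand-in for an HSM master key):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

kek = os.urandom(32)                 # stand-in for an HSM's key-encryption key
iv = os.urandom(16)
aes192_key = os.urandom(24)          # 192-bit key to be protected

padder = padding.PKCS7(128).padder()
padded = padder.update(aes192_key) + padder.finalize()   # 24 -> 32 bytes

enc = Cipher(algorithms.AES(kek), modes.CBC(iv)).encryptor()
blob = enc.update(padded) + enc.finalize()

print(len(aes192_key) * 8, "->", len(blob) * 8, "bits")   # 192 -> 256
```

So the encrypted form of a 192-bit key simply cannot pose as a 192-bit byte string inside OpenSSL.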
Provider export of private PKCS#11 keys must be disallowed, because a private key is an HSM key and I cannot somehow transform its value and provide it, for example, to the FIPS provider of OpenSSL. That wouldn't be a good thing. But the good news is that it's not needed in the TLS protocol. The crypto functions that the provider must support are signing — that's okay — and key exchange, but this key exchange must be implemented using the PKCS#11 encrypt and decrypt functions, not with the wrap function. That tells you that you exchange plain-text keys, not HSM keys. What you do exchange in the TLS protocol are symmetric keys, and as you saw before, these symmetric keys are in the class we consider lower risk — less valuable, if you so want. So RSA seems to be a key type that a provider can handle well.

The EC keys cause a problem, and the problem is actually the way OpenSSL handles EC key types: it doesn't distinguish between EC signing keys (ECDSA keys, if you so want) and key derivation keys (ECDH keys). There are just EC keys, and a provider that handles the EC key type must, according to the provider specification, provide all the functions. Again, although these keys can be defined or referred to using a URI, key generation is not needed for signing keys — but key generation is needed for the ECDH keys. So somehow our provider would have to generate an ECDH key. Provider export of private keys must be disallowed; luckily it's not needed, hopefully. The crypto functions that we require are signing and, again, key exchange — in particular, ECDH is the only key exchange supported by TLS 1.3, the latest protocol version. And we cannot use the HSM's derive function, because that function, when given a sensitive key, would generate a sensitive key, and a sensitive key would be an HSM object which couldn't be used by OpenSSL. Okay — here we have a problem that we have to work on.
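RSA key exchange mapped onto PKCS#11 encrypt/decrypt looks roughly like this: the client encrypts a fresh plain-text premaster secret under the server's RSA public key, and the server-side decrypt is exactly the call a provider can hand to the HSM's decrypt function. A sketch with the `cryptography` package (TLS's RSA key exchange uses PKCS#1 v1.5; variable names are illustrative):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Server's long-lived RSA key; in the real setup the private half lives in the HSM.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Client side: generate a plain-text premaster secret and encrypt it
# with the server's public key.
premaster = os.urandom(48)
ciphertext = server_key.public_key().encrypt(premaster, padding.PKCS1v15())

# Server side: this decrypt is what the provider maps to the PKCS#11
# decrypt operation on the HSM; the result is a plain-text symmetric secret.
recovered = server_key.decrypt(ciphertext, padding.PKCS1v15())
```

Note that the exchanged key comes back in the clear on purpose — it belongs to the lower-risk class, while the RSA private key never leaves the HSM.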
That is the dilemma. EC key generation must be implemented by the provider, because it's used by the TLS protocol; we cannot just say we don't use it — if you have an EC key and you want to use the provider, it has to go there. The derivation must also be implemented by the provider, and on the other hand it may not be computed on the HSM, for the reason I mentioned before. Now we could think we can export the key and give it to another provider. That doesn't work either, because if the ECDH key were generated on the HSM, it would be a protected key that we cannot just export — that would make the HSM protection fail. There are ways to export such keys if you set the attributes accordingly, but those are hacks, and again, these things just tear down all the HSM protection. You shouldn't do that. So we cannot export it, so we must not generate it on the HSM.

So what do we do? The solution to the dilemma is to work with two different key spaces: one key space for the signing keys and another key space for the ephemeral keys — I should say ephemeral, because with RSA the story is a little different. Key generation always generates keys of the second key space, the ephemeral keys. Keys of key space number one must be there as a given: they must be generated outside of the protocol, outside of the system. You go to your HSM and say "generate key" with whatever tool is your preferred one — be it p11-kit, be it openCryptoki's p11sak, or whatever — you generate a key and configure your TLS server with it. You don't do it inside your program using the OpenSSL interface; you can do it inside your application talking to PKCS#11 directly. So the signing operations are implemented in the provider for key space one, and the key derivation operations are implemented for the ephemeral keys by forwarding to a plain-text implementation — you kind of forward the implementation to another provider, for example.
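The two-key-space split can be sketched as a small dispatcher: operations on long-lived HSM keys (key space one) go to the PKCS#11 path, while key generation and derivation always happen on plain-text software keys (key space two). All class and function names here are invented for illustration; the `_hsm_*` and `_soft_*` stubs stand in for real PKCS#11 calls and default-provider calls.

```python
class SignProviderSketch:
    """Illustrative dispatcher for the two key spaces described above."""

    def sign(self, key, data):
        if isinstance(key, str) and key.startswith("pkcs11:"):
            return self._hsm_sign(key, data)     # key space 1: URI -> HSM handle
        return self._soft_sign(key, data)

    def keygen_ephemeral(self):
        # Key space 2: always a plain-text software key, never an HSM object,
        # so the later derive can produce a usable (non-sensitive) secret.
        return {"space": 2, "value": b"\x11" * 32}

    def derive(self, eph_key, peer_public):
        assert eph_key["space"] == 2, "derive must never see an HSM key"
        return self._soft_derive(eph_key, peer_public)

    # Stubs standing in for PKCS#11 signing / default-provider implementations:
    def _hsm_sign(self, uri, data): return b"sig-from-hsm"
    def _soft_sign(self, key, data): return b"sig-from-software"
    def _soft_derive(self, key, peer): return b"\x22" * 32

p = SignProviderSketch()
sig = p.sign("pkcs11:token=t;object=server-key", b"handshake-transcript")
eph = p.keygen_ephemeral()
```

The invariant the assertions enforce is the whole point: key space one is sign-only, key space two is generate-and-derive-only.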
The same holds for key space two: these keys are generated, but not on an HSM — if you generate a key of key space two, you generate a plain-text key. So we have a few restrictions with that. The application must not generate keys of key space one using the provider, but outside of the provider. The application must not use signing keys for Diffie-Hellman operations, because the signing keys, as we know, come from key space one, and keys used for Diffie-Hellman must be in the second key space. And keys that are used in Diffie-Hellman-like key derivations must be ephemeral, and these ephemeral keys must not be HSM-protected. Not all applications fulfill these restrictions, but a typical security protocol does — in particular TLS.

There are a few things that make working with PKCS#11 hard. I haven't introduced the term token yet: a token is the representation of an HSM in the PKCS#11 standard. In order to use a token, you typically need a PIN. So the application must be configured to learn which token or which slot is used — tokens are associated with slots, and a slot is a small number — and somehow you have to provide the PIN to the program. We chose for our provider that the PIN is included in the URI; the RFC, the standard, allows this. So the URI includes either the PIN itself or the path to a file that contains the PIN. Clearly you do not want your application to store the URI somewhere with the PIN included. So maybe your application has to edit the URI that it loads from your file system and insert the PIN after it has been provided interactively, or by some other secure means.

Then, in PKCS#11, keys are represented by handles, and handles are specific to sessions.
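Token, object, and PIN are all carried in such a PKCS#11 URI. A minimal sketch of taking one apart — RFC 7512 defines the real grammar (including percent-encoding rules this parser only partially handles), and the token and file names below are made up:

```python
from urllib.parse import unquote

def parse_pkcs11_uri(uri: str) -> dict:
    """Simplified RFC 7512 parser: path attributes are ';'-separated,
    query attributes (where pin-value / pin-source live) are '&'-separated."""
    assert uri.startswith("pkcs11:")
    body = uri[len("pkcs11:"):]
    path, _, query = body.partition("?")
    attrs = {}
    for part in filter(None, path.split(";")):
        k, _, v = part.partition("=")
        attrs[k] = unquote(v)
    for part in filter(None, query.split("&")):
        k, _, v = part.partition("=")
        attrs[k] = unquote(v)
    return attrs

uri = "pkcs11:token=my-token;object=tls-sign-key;type=private?pin-source=file:/etc/pki/pin.txt"
attrs = parse_pkcs11_uri(uri)
print(attrs["token"], attrs["object"], attrs["pin-source"])
```

Note the two PIN options: `pin-value=...` puts the PIN in the URI itself, `pin-source=...` points at a file — which is why an application may want to splice the PIN into a stored URI at runtime rather than keep it on disk.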
A session is a computation context in PKCS#11, for example for a cryptographic operation that goes over multiple steps, like a hash computed over multiple steps. For each new session — according to the standard, not according to all implementations, but according to the standard — if you want to use a key, you have to find the key anew and get a new handle for it. Sessions may not be shared between processes, and whenever you start a new process, you have to call C_Initialize in order to initialize your PKCS#11 library. That is a little obstacle when your application does lots of forks, for example.

So we have implemented exactly a provider that follows the constraints that we have. We called it a sign provider, because it does only the signing part via PKCS#11: it works on key space one with the HSM, and on key space two with other means. Since we didn't want to duplicate code, we used, if you want, hooks to actually call code from other OpenSSL providers, the default providers. It works with RSA and ECDH, and it has been tested. Well, some in the audience know that we maintain an open-source PKCS#11 implementation, so we have tested this with openCryptoki and different tokens, some of which are real HSMs, others software tokens. We used OpenSSL as server and as client to prove that things work, and we also made Apache work. However, Apache needs a few patches: it doesn't yet know how to deal with the URI scheme that we use, so we had to add this. And we ran it in debug mode (-X), because currently our provider has a restriction that it doesn't do the C_Initialize right if the application that calls OpenSSL forks. We know what to do, but due to resource constraints we didn't yet have the time to do it. So in order to run Apache in a mode that doesn't fork, we used the -X option of Apache.
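The fork problem can be handled with an at-fork hook: PKCS#11 requires each process to call C_Initialize for itself, so a child must re-initialize and drop inherited sessions. A sketch of the pattern in Python (the `Pkcs11State` class is a stand-in for a real library context; the comment marks where C_Initialize would go):

```python
import os

class Pkcs11State:
    """Stand-in for a library's PKCS#11 context (C_Initialize + sessions)."""
    def __init__(self):
        self.initialized_pid = None
        self.sessions = []

    def ensure_initialized(self):
        if self.initialized_pid != os.getpid():
            # First use, or a fork happened: inherited sessions are invalid
            # and must not be reused in this process.
            self.sessions.clear()
            # Real code would call C_Initialize here for the current process.
            self.initialized_pid = os.getpid()

state = Pkcs11State()
# Re-initialize lazily in every forked child (Unix only):
os.register_at_fork(after_in_child=state.ensure_initialized)
state.ensure_initialized()
```

Checking the PID on every use (instead of relying solely on the fork hook) also covers libraries loaded after the fork already happened.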
By the way, we even have a little movie of what you have to do to configure the provider correctly. Okay, that is actually what I just said: we forward the Diffie-Hellman requests to the default providers. That way, if an application uses a signature operation, it goes to the PKCS#11 token, and if the application uses derive or verify operations, it goes to the forwarding provider. By the way, we also didn't send the verify operations or the encrypt operations to the HSM. They're not security-relevant — they use public keys — so we decided to forward them as well. That's good for performance, because then you can use your local implementation rather than doing the I/Os.

PIN handling: as I said, we allow the URI to either include the PIN directly or to include a path to the PIN. A piping option for the PIN is not supported.

There are a few more HSM PKCS#11 providers, or approaches. One of my colleagues has a provider used in a tool that allows HSM-protected keys to be used with dm-crypt and when communicating with a key management server, but that provider talks a native HSM language, not PKCS#11. And then there's another project, currently led, I think, by Simo Sorce from Red Hat. They're working on a generic PKCS#11 provider — they are not going with our restriction; we are satisfied if we can at least do the signing part with the HSM. And they are running, of course, into a few challenges. Actually, that provider in some tests seems to work even though it hasn't really solved the ECDH problem in the way we did, and we think that's due to some luck in the provider selection for the keys presented. If you do not fix the provider priority — currently all crypto is done in providers, by default in the default provider, of course — then you're not sure which provider is used.
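What the forwarding path actually computes for the derive operation is a plain software ECDH over ephemeral keys, followed by a KDF. A sketch with the `cryptography` package (the HKDF `info` label is made up; TLS 1.3's real key schedule is more involved):

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Both sides generate fresh ephemeral keys (key space two, plain text).
client_eph = ec.generate_private_key(ec.SECP256R1())
server_eph = ec.generate_private_key(ec.SECP256R1())

# The derive call that the provider forwards to a software implementation:
client_shared = client_eph.exchange(ec.ECDH(), server_eph.public_key())
server_shared = server_eph.exchange(ec.ECDH(), client_eph.public_key())

# The shared secret then goes through a KDF to produce session keys.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"toy tls key schedule").derive(client_shared)
```

Because both keys are software keys, the shared secret comes out in the clear — exactly what OpenSSL needs and exactly what a sensitive HSM key could never yield.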
And then it seems that if you're just using default settings, the ECDH somehow — at least in our experiments — was routed to the default provider, which is just fine, but doesn't really follow the provider documentation. Okay, so as I mentioned, there are a few seeming differences between the provider specification and the provider framework. It looks like missing crypto functions are possibly automatically forwarded to the default provider. That is something that at least wasn't the case when we started out with our provider work, so it might have changed during the lifetime of OpenSSL 3.0. And, well, each implementation, even where the spec says the behavior is undefined, must of course be deterministic.

There are a few suggestions to the OpenSSL provider work from our team: in particular, to bind a provider to a key store, so that the provider is used only for keys taken from that key store and all other keys are handled by the default provider. Another suggestion would be not to require implementing, inside the provider, all functions that a key can be used with — so that it can be just a partial data type, if you so want.

Okay, now, I mentioned the project from Simo Sorce. There may be reasons why you want a generic PKCS#11 provider, but you should think very carefully about what that reason could be. I even have a reason in mind why I would want one, but one thing is not a reason to have a generic PKCS#11 provider: if you just like the OpenSSL API better than PKCS#11, that's a bad reason. Because if you want to use an HSM and you're not bound by other reasons to use OpenSSL, then please use PKCS#11. It's tailored to tell the HSM what to do; the OpenSSL API, as of today, is just not.
So a good reason to use a PKCS#11 provider is if you have an application that uses OpenSSL and works fine with clear keys, and you want to just configure it, or modestly modify it, in order to use an HSM. That is, for me, a valid reason to use a PKCS#11 provider — not because I like one interface better than the other, because both APIs have their merit and they're made the way they are to do what they are meant to do.

Okay. What's really missing is support for symmetric keys, and what is needed in order to let OpenSSL support HSM-protected symmetric keys is some opaque-object concept for symmetric keys. There was a proposal some years ago from Nicola Tuveri, who actually worked with us on that project, and only recently I was pointed to a discussion where this is being worked on, and I think that is great news. It looks like the community is understanding the problems now, and hopefully in the not-too-distant future we'll have a solution.

And with that I want to end my talk and conclude: it is possible to implement a PKCS#11 provider that at least protects your most valuable TLS keys. The provider framework is nice, but it has some weird aspects that we might want to work on in order to do this in a more natural way. I think the hook of just forwarding things — even Holger, who implemented it, thinks it's ugly; he doesn't like it, but he liked it more than duplicating code at least. And yes, there are a few reasons why a generic PKCS#11 provider makes sense, and some things need to change in OpenSSL to make this happen. And with that, I'm open to questions.

No, we didn't pass it around, but I think the presentation contains the link. Feel free to try it, and we would be very interested in learning about your experience. If you find bugs, please report them.

Let's see if this one's working. Hello. Maybe you should repeat the question for the virtual audience. This one's not working, I guess.
Actually I have two questions. The first one is about the usage of the engine in the pre-OpenSSL-3 era: what was the difference in terms of the PKCS#11 support? You could just use it for the whole TLS communication, where you can have URIs and then you just pass them over into OpenSSL and create the server. And the second question would be: what is the problem with wrapping and unwrapping when we talk about PKCS#11? You only briefly touched on this.

First topic, what was there before: there were engines, and there actually was a PKCS#11 engine. Should I go over here? I do not know all the features of this engine, but it worked at least as a signing engine. That worked fine; we tested this. And I think it had the same problem, that the engine didn't do the C_Initialize on fork.

And the second question was about encrypt/decrypt versus wrap/unwrap. Okay, these are PKCS#11 terms. Encrypt encrypts data and decrypt decrypts data — that is easy. You can consider a key as being data and then encrypt and decrypt the key material. That is what our sign provider does when using RSA as a key exchange mechanism: it takes the value of a plain-text key and encrypts it — or, well, actually the provider would decrypt it on the HSM, because the encryption operation is done with the public key.

Now, PKCS#11 also has operations called wrap and unwrap. Consider the wrap function as an export function. You have a key handle of a key that is protected by the HSM; you provide that key handle together with a wrapping key, and the HSM will return the plain-text value of the key to be wrapped, encrypted with the wrapping key.
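The wrap/unwrap path — as opposed to encrypting key material as ordinary data — can be sketched like this: the wrapping is done under a key pair whose private half sits in the receiving HSM, so the transported key is never in the clear outside an HSM. A sketch with the `cryptography` package (OAEP is used here for the transport; the variable names are illustrative):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Receiving HSM's wrap/unwrap key pair; the private half never leaves it.
unwrap_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrap_key = unwrap_key.public_key()

# Sending HSM, wrap step: export the protected key encrypted under wrap_key.
secret_key = os.urandom(32)                  # the HSM-protected key material
wrapped = wrap_key.encrypt(secret_key, oaep)

# Receiving HSM, unwrap step: import; in a real HSM the result stays an
# internal object referenced by a handle, it is never returned in the clear.
imported = unwrap_key.decrypt(wrapped, oaep)
```

The attack the talk warns about is exactly this code run *outside* an HSM: whoever holds `unwrap_key` in software sees `imported` in the clear — which is why non-extractable and wrap-with-trusted flags exist.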
The idea is that another HSM also has access to the wrapping key — either to the corresponding private wrapping key, or, if the wrapping key was a symmetric key, one that was negotiated between the two HSMs — and it can now use an unwrap operation to import the key: you give it the encrypted string and the handle to the unwrapping key, and the HSM does the unwrapping.

Now, you can of course generate your RSA wrap/unwrap key pair using OpenSSL and export an HSM-protected key into a clear key. That is the trick I mentioned: because you know the private key, you can do the unwrap in the clear — but that is just what you shouldn't do with an HSM. If your HSM key is flagged as non-extractable, so CKA_EXTRACTABLE equals CK_FALSE, then the wrapping operation isn't possible. There's also a flag so that a key can only be wrapped with trusted keys, and you must have your operator flag your wrapping keys as trusted, which restricts that attack. So your HSM might be configured to not allow it, and if you do it anyway, you just work against the spirit of an HSM. Did I answer your question?

You said at one point that you have ways to extract private keys or secrets from the HSM, but it's very slow. It sounds like you have some information leak there. How would you do that? It's slightly off-topic, I suppose, but do you have a back door?

Can you repeat your question? I only caught half of it. — So, on some slide you said that you have ways to extract the private key from the HSM, but you don't want to do that. — I just mentioned this trick with the RSA key from OpenSSL, if that is the operation you mean. You shouldn't do that, and if you're careful, you configure your HSM and your keys to not allow it.

Any other questions? I think we are ready for lunch. Yes, lunch is next. We'll reconvene at 2:10.