So in this session I'll present our work on anonymous attestation with subverted TPMs, which is joint work with Jan Camenisch and Anja Lehmann. If you look at a computer nowadays, it typically has a TPM embedded in it, a trusted platform module. This is a tamper-resistant piece of hardware designed to create cryptographic keys, store them securely, and use them securely. What it can also do is observe the state of the host system, so the laptop in which the TPM is embedded. An example: when the laptop starts up, during the boot sequence the TPM can observe which software is being loaded onto the platform, to end up with a report of which software the laptop started. Now it might be interesting to perform remote attestation using this. The TPM convinces a remote verifier that the laptop started certain good software, so it's not running malware but whatever we expect it to run. For example, in a corporate network you might want laptops to first attest that they're running secure software before you let them onto the network. This process is called remote attestation, and typically it is a two-phase process. First there is a registration step, the join phase, in which the platform, so the host and the TPM together, talks to an issuer, an authority in the system, to obtain a membership credential in a one-time setup. After that, the platform can sign attestations: it can give a verifier a cryptographic proof that some TPM measurement is correct. So in our example of the secure boot sequence, the TPM can convince a remote verifier that the laptop started the correct software. We could do this with standard signatures and standard X.509 credentials, but then there would be one problem, and that is that you're linkable.
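The two-phase flow just described, a join phase where an issuer certifies the TPM's key and a sign phase where the platform attests to a message, can be sketched as a toy. Everything here is an illustrative assumption, not the real TPM or DAA protocol: the tiny Schnorr-style group, the names, and the wire format are all made up, and the parameters are far too small to be secure.

```python
# Toy sketch of two-phase remote attestation with standard signatures.
# Join: the issuer certifies the TPM's public key (the "credential").
# Sign: the platform signs a measurement; the verifier checks the chain.
import hashlib
import secrets

P = 2039    # small safe prime, p = 2q + 1 (toy size only)
Q = 1019    # prime order of the subgroup generated by G
G = 4       # generator of the order-Q subgroup

def h(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def sign(x, pk, msg):
    k = secrets.randbelow(Q - 1) + 1
    R = pow(G, k, P)
    c = h(R, pk, msg)
    return c, (k + c * x) % Q              # Schnorr signature (c, s)

def verify(pk, msg, sig):
    c, s = sig
    R = (pow(G, s, P) * pow(pk, -c, P)) % P  # recover R = g^k
    return c == h(R, pk, msg)

# Join phase: issuer certifies the TPM's public key.
iss_sk, iss_pk = keygen()
tpm_sk, tpm_pk = keygen()
credential = sign(iss_sk, iss_pk, tpm_pk)

# Sign phase: the platform attests to a boot measurement.
attestation = (tpm_pk, credential, sign(tpm_sk, tpm_pk, "boot-ok"))

# Verification works -- but the verifier sees tpm_pk in the clear, so
# every attestation from this platform is linkable (the problem DAA solves).
pk, cred, sig = attestation
assert verify(iss_pk, pk, cred) and verify(pk, "boot-ok", sig)
```

The final assertions make the linkability point concrete: the verifier can only check the chain because it learns the TPM's public key, which identifies the platform across attestations.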
Whenever you send such an attestation to a verifier, he sees the TPM's identity, and that lets him follow you around. If you do these attestations to many different verifiers, your privacy is lost. To prevent this, people came up with direct anonymous attestation, or DAA. This works the same as the remote attestation I described earlier, but now the attestations are anonymous: they don't reveal anything about the platform in question. Direct anonymous attestation was introduced in 2004 by Brickell, Camenisch, and Chen, and it was made for the TPM 1.2 standard. At the time there were also privacy concerns about the effect of putting a chip in your computer that can see what you're doing, and to address those concerns DAA was included in that standard. Later the TPM 2.0 standard came around, which includes support for different anonymous attestation schemes, but it still supports DAA. Since then DAA has been standardized by ISO, and hundreds of millions of TPMs have been sold, so this is quite a large deployment of a cryptographic scheme. There is also interest from other angles. For example, the FIDO Alliance, an industry alliance trying to standardize passwordless authentication, uses anonymous attestation to attest that a certain cryptographic key is securely stored. Intel's SGX, their trusted execution environment, uses a variation on DAA as its remote attestation mechanism. And finally, you can see DAA as a special form of anonymous credentials where a security device handles your key. So this is relevant for many different areas. Now let's look at the security of DAA. This is a sort of signature scheme.
So of course we need a form of unforgeability. In particular, we consider a corrupt host, so a corrupt laptop, that can talk to an honest TPM embedded in it and make many attestations on messages. We want to prevent it from coming up, by itself, with an attestation on a new message which the TPM never approved. In our example about trusted boot, this means that a corrupt laptop running the wrong software cannot come up with an attestation claiming that it is in fact running the right software. Second, we need anonymity; this is why we're using DAA. The anonymity requirement we have is that the verifier, given an attestation, does not learn to whom it belongs. More precisely, given two different attestations, it cannot even tell whether they are from the same platform or from two different platforms. And we want this property to hold even if the issuer, so the authority in the scheme, is corrupt: even if he tries to give us bad credentials and is colluding with the verifier, even then we want the anonymity to hold. That's very good, but there's one surprising thing here, and that is that in all the definitions of anonymity we trust the TPM.
We trust that the TPM follows the protocol, and this is not what you would expect, because one of the reasons to introduce DAA was to address the privacy concerns about this chip: people don't trust the chip, and yet the notion of privacy requires us to trust that chip in the first place. In fact, recent revelations have shown that it is very naive to trust some piece of hardware running some crypto with your privacy. This is not what you would expect from a security definition. In different fields of cryptography, people have looked at subversion resilience: what security can I still have if I run the wrong algorithms? We want to do something in the same direction here. So in this talk, we're going to look at whether we can do anonymous attestation where we have privacy even if the TPM is corrupt, or at least not following the protocol. Let's look at the existing security definition of DAA. The most recent definition is in the UC framework.
So it's an ideal functionality, and I'll show you how signing works and how it guarantees anonymity. The host starts to sign a message: he gives the ideal functionality the instruction that he wants to sign a message m. Then the TPM must approve the message. If the TPM approves, the functionality performs some checks: first it sees whether this platform has performed the registration steps that I talked about earlier, and then it outputs a signature. If the TPM is honest, the functionality must output an anonymous signature, because this is how we define anonymity here. To make an anonymous signature, we use local computation in the functionality: the functionality has some algorithms embedded in it to compute something that looks like a signature. The way we guarantee that the signature we output is anonymous is by only giving the message to the algorithm. We don't give anything that depends on the identity of the platform to this algorithm, which means the resulting value cannot depend on the identity of the platform. So the signature will be distributed identically regardless of who is signing, and that guarantees anonymity. Of course, the verifier can use the functionality to verify, and that is where we guarantee unforgeability, but I won't go into detail there.

Okay, so this is the existing functionality; it guarantees an anonymous signature if the TPM is honest. Now we want to strengthen this to come up with a functionality that also gives guarantees when the TPM is corrupt. A first good guess: we had this check that we only output an anonymous signature if the TPM is honest, so now we do that even if the TPM is corrupt. But this is not enough yet, because in the security model we're trying to achieve, everybody is corrupt except the host computer. Corruptions are typically modeled in such a way that if a party is corrupt, one central adversary controls all the corrupt parties, and you see on the left that the adversary who has corrupted the TPM sees which message a certain TPM is signing. So it knows exactly which messages the TPM has signed. Now, if that message is somewhat unique and a corrupt verifier then sees a signature on that message, it knows that it was me. So we don't have privacy unless we're all signing the same message. We could try to prevent the adversary from seeing which messages the TPM is signing by not giving the message to the TPM, but this prevents us from having a meaningful definition of unforgeability, because remember, for unforgeability we need the TPM to approve the signing of messages. So this does not work, and we cannot realize a meaningful definition of privacy here.

But in fact, the corruption we're modeling here is very strong: we model a TPM controlled by a central adversary. The attack we envision, however, is a TPM running bad algorithms or taking bad randomness, but still a local piece of hardware in my laptop. So we need to refine our corruption model. Now we put the adversary in the TPM in a jail cell: we limit his capabilities. The UC framework allows us to define fine-grained corruption models, and what we do here is say that the adversary can define bad behavior for the TPM, but it's limited to that; the TPM is not controlled by one central adversary. We can do this using the framework's body-shell paradigm. This way, even though the TPM approves messages and can have bad influence there, it is not controlled by the central adversary, so the central adversary does not see every message we're signing. And we think this is optimal privacy.
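The way the ideal functionality's signing interface guarantees anonymity by construction, described a moment ago, can be sketched in a few lines. This is only an illustrative toy, not the actual UC functionality: the class name, interface, and dummy signing algorithm are all assumptions. The point it shows is structural: only the message crosses the boundary into the embedded signing algorithm, so the output cannot depend on who is signing.

```python
# Minimal sketch of an ideal functionality that enforces anonymity by
# construction: the embedded signing algorithm receives only the message,
# never the platform's identity.
import os

class DaaFunctionality:
    def __init__(self, sig_algorithm):
        self.joined = set()                # platforms that completed the join phase
        self.sig_algorithm = sig_algorithm # local algorithm embedded in the functionality

    def join(self, platform_id):
        self.joined.add(platform_id)

    def sign(self, platform_id, msg, tpm_approves):
        if platform_id not in self.joined:
            raise RuntimeError("platform has not joined")
        if not tpm_approves(msg):          # unforgeability: the TPM must approve msg
            raise RuntimeError("TPM did not approve")
        # Anonymity: only msg is passed on -- platform_id never reaches
        # the signing algorithm, so the signature's distribution is the
        # same for every platform.
        return self.sig_algorithm(msg)

# Dummy algorithm standing in for "something that looks like a signature".
f = DaaFunctionality(lambda msg: (msg, os.urandom(16)))
f.join("platform-A")
sig = f.sign("platform-A", "boot-ok", tpm_approves=lambda m: True)
```

The design choice being illustrated is exactly the one from the talk: anonymity is not proved about the output value, it is forced by what information the signing algorithm is allowed to see.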
This is the strongest privacy model we could hope for, so we want to achieve this level of privacy. In the real world we are then in the same situation: the TPM is a local corrupt algorithm in my computer, but it is not controlled by the central adversary, while the verifier and the issuer are still colluding and corrupt. So this is our new security model, and now we have to look at how we can achieve it with protocols. First, let's look at existing protocols and how far they get us. All existing protocols use the same common approach: the TPM holds a secret key, and this is the only key the platform has. During the setup phase, the join phase, they authenticate to the issuer, and the issuer places a signature on a commitment to the TPM key; this is called the membership credential. After that, we can make attestations, and an attestation is a zero-knowledge proof proving that a message was signed with a TPM key that is certified by the issuer. This is what all existing schemes follow; the differences are in which signature scheme you use to make such a credential, or how you instantiate the zero-knowledge proof, but other than that they're all the same. Unfortunately, none of them are good enough to realize our notion of privacy, for two reasons actually. The first reason is that this zero-knowledge proof, that is, the attestation, is a statement about the TPM secret key, meaning that the TPM and the host must make the zero-knowledge proof together; the host cannot do it by himself because he doesn't know this key. That means that a corrupt TPM might give some malicious contribution to the zero-knowledge proof, rendering the whole proof no longer honestly generated.
So we cannot claim any zero-knowledge about this proof. The second reason is that all the key material is stored by the TPM, and if the TPM is malicious, that means we have no good key material left; again, that means we cannot have the anonymity properties we want. So we come up with a new approach that addresses exactly these two concerns. The first change is that we no longer only have a TPM secret key; we also have a corresponding TPM public key, and instead of signing a commitment to the secret key in the membership credential, we put the public key in there. That means that the zero-knowledge statement we prove, which is the attestation, is no longer a statement about the TPM secret key but about the TPM public key, and the host knows that key, so the host can create the full zero-knowledge proof. Because the host is honest, we know that we have actually created a proper zero-knowledge proof. The second change is that we split the key: we no longer have only a key of the TPM, but the host and the TPM together create the key. That means that even if the TPM is malicious and creates a bad key share, the host adds enough good key material to end up with a proper key for the platform altogether. With this new approach, we show that we can realize the level of privacy that we previously defined.

I'll give you a bit more detail here. We specify three building blocks that we need, reflecting the picture I showed on the previous slide, and with secure instantiations of those three building blocks we get a secure DAA scheme. The first is a split signature. This is very similar to multi-signatures or client-server signatures and allows the TPM and the host to make a signature together with their individual key shares. So this is similar to existing notions of multi-signatures, but we need some extra properties. The first is that a signature should not reveal anything about the public key under which it is valid, because that would destroy the privacy we're looking for. The second is that we need some uniqueness properties: the signature must be unique for a given key and message, and a given signature on a message can only be valid under one key. These uniqueness properties limit the way in which an adversary can have malicious influence, because there are not so many choices he can make, and this makes it easier to prove the anonymity we want to have. In fact, we show that we can efficiently instantiate this based on BLS signatures. Second, we use signatures on encrypted messages to form the credential of the platform, and here we present an efficient instantiation based on AGOT structure-preserving signatures on ElGamal ciphertexts. Finally, we need zero-knowledge proofs of knowledge to glue everything together, and one efficient instantiation is based on Schnorr proofs, possibly with a CRS trapdoor. So we show that any secure instantiation of these building blocks fulfilling the properties that we need yields a secure DAA protocol. But if you use the efficient instantiations that we propose, we actually get a very efficient and practical DAA protocol. Signing takes only nine exponentiations and ten pairings for the host, which runs in tens of milliseconds, and more importantly, the TPM only has to compute two exponentiations to make an attestation. Of course, it's important to realize here that the TPM is orders of magnitude slower than the host computer, so you want to minimize the workload for the TPM; this has the greatest influence on the efficiency of a DAA scheme. In fact, in terms of the TPM's signing operation, this is the most efficient DAA protocol there is so far.
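The split-signature building block just described can be sketched as a toy. The shape follows BLS, a deterministic signature H(m)^x, with the secret key split into additive shares held by the TPM and the host, whose signature shares multiply into the platform signature. Everything here is an illustrative assumption: the tiny mod-p group stands in for the pairing group, and where real BLS verification uses a pairing, this toy simply recomputes with the combined key. The parameters are far too small to be secure.

```python
# Toy sketch of a split BLS-style signature: additive key shares,
# multiplicative combination of deterministic signature shares.
import hashlib
import secrets

P = 2039                      # toy safe prime, p = 2q + 1
Q = 1019                      # prime subgroup order

def hash_to_group(msg):
    d = int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big")
    return pow(4, d % Q, P)   # map the message into the order-Q subgroup

x_tpm = secrets.randbelow(Q - 1) + 1    # possibly subverted TPM share
x_host = secrets.randbelow(Q - 1) + 1   # honest host share
x = (x_tpm + x_host) % Q                # combined platform key

msg = "boot-ok"
h = hash_to_group(msg)
sig_tpm = pow(h, x_tpm, P)              # TPM's signature share
sig_host = pow(h, x_host, P)            # host's signature share
sig = (sig_tpm * sig_host) % P          # platform signature H(m)^x

# Determinism gives the uniqueness property from the talk: for a fixed
# key and message there is exactly one valid signature, which limits
# what a malicious TPM share can influence.
assert sig == pow(h, x, P)
```

Even if `x_tpm` is chosen adversarially, the honest host's share `x_host` keeps the combined key well distributed, which is the point of splitting the key in the new approach.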
So that makes this actually very practical, and verification takes only four exponentiations and eight pairings, which shows that we can do this very efficiently. To wrap up: we show that the anonymity definitions that we have used over the last decade for DAA are not what we expected them to be. Even though the point was to reduce trust in the TPM, we actually still needed it to be honest for any anonymity. We then define DAA with optimal privacy, so the best privacy we can hope for, in the form of a UC functionality, and we show how to model such a subversion attack on a TPM in this framework. Then we define a new approach to DAA protocols, because the existing schemes are not sufficient to realize this notion of privacy, and we show that with this new approach we can realize our ideal functionality. Finally, we give a very efficient concrete instantiation that can actually be used in practice. With that, I would like to thank you for your attention, and I'm happy to take questions.