Yeah, so this talk is about some work that started in the academic world. We looked at how to make DAA on TPMs provably secure, and then we thought, since we had looked at a real-world crypto scheme, it would be nice if we could now get the provably secure version back into the real world. This talk is basically about the journey that we took with this work. So let's start with what direct anonymous attestation and the TPM actually are. TPM stands for Trusted Platform Module, and it's a secure crypto processor, a small piece of hardware that can securely create and use cryptographic keys. The idea of the TPM is that it's embedded in a larger system called the host, which could be a computer or a smartphone, and is meant to monitor and measure the host. It can, for instance, make attestations of what software is running on the host, or of whether the host is really using keys that are stored on the TPM. The idea is that the TPM then serves as a root of trust. You cannot secure the entire phone, but if you have some trusted chip that tells you that everything on the phone is actually okay, you are somehow convinced that the entire phone also does things in a secure manner. And while you could make those hardware-based attestations with standard certificates and standard signatures, by just letting the TPM sign whatever it measures from the host, the problem would be that all the hardware-based attestations would then be fully linkable and reveal the identity of the TPM. That would be really bad for privacy, because every time you use hardware-based attestation you're linkable, and all the things can be put together. So that raised some loud concerns from the privacy community. And the good thing here is that this time the concerns were actually heard and acknowledged.
So what the TCG, the Trusted Computing Group that standardizes those TPMs, then did is they went for a protocol that is called Direct Anonymous Attestation and was developed exactly for that purpose. This DAA protocol basically makes those attestations, but in an anonymous way. The verifier is still convinced that it sees an attestation from a certified TPM, but without learning which concrete TPM has made the attestation. And the attestations are really fully unlinkable, unless the host decides otherwise: the host can make those attestations or signatures with respect to a pseudonym, and it can choose to reuse the pseudonym, but by default they will be fully unlinkable. Okay, and the first protocol, as I said, was done as a joint cooperation of the privacy community, cryptographers, and security experts when these TPM chips came out. The first protocol was proposed by Brickell, Camenisch, and Chen in 2004, was adopted by the TCG and standardized in the TPM 1.2 specification, and later on also in ISO. The first scheme was based on RSA, but in the newer TPM 2.0 specification they went to elliptic-curve and pairing-based schemes for efficiency. The newer TPM interfaces also have a really nice design: whereas the first TPM spec was really hard-coded for that one particular protocol, the new one offers very flexible APIs and actually supports many different DAA protocols. Some of these protocols have also been ISO-standardized. And this is not just some theoretical concept or some spec that nobody uses: chips with such TPM support have been sold more than 500 million times. So it's really one of the more complicated protocols, with zero-knowledge proofs and everything, that is quite widespread in the real world.
And I would say that the interest is actually growing nowadays, because we see more and more applications, for instance secure authentication with mobile devices, or secure measurements with IoT things, where it's hard, as I said, to secure the entire device, but you can put some trusted chip in there and then bootstrap the security from there. In fact, the most recent advances in trusted hardware, the SGX platform from Intel, also use a DAA variant called Enhanced Privacy ID, or EPID, as the de facto standard for authentication of SGX-enabled devices. Okay, so that is a very short intro to what DAA and the TPM are. As we heard a couple of times at this conference already, it's nice if those things in the real world are actually provably secure. That means we should have a formal security model that really defines what the adversary's capabilities are and what he is supposed or not supposed to be able to do, for instance, what anonymity and unlinkability formally mean. Then, if we have such a security model, the cryptographic scheme should be proven secure in that model, because that tells us that there is no attack strategy within the model. And finally, because we want real-world security, we should have a secure implementation of the protocol that has been proven secure. Okay, so let's see where we are on this roadmap now, 12 years after DAA was invented. It actually had a pretty good start, because as I said, it was developed by cryptographers from the beginning, so they made the protocol together with a formal security model and proved the scheme secure in that model. However, the formal model was not complete enough: it was missing some features and functionality.
In particular, it was missing the capability to actually output signatures to the verifier, which might sound a bit weird, but basically verification was modeled as an interactive process. That means that in practice you are not allowed to work with signatures: you're not allowed to store them, or maybe re-verify them or send them forward, because then you step outside of the security model and the provable security guarantees do not hold anymore. So that was a significant limitation, and in a follow-up paper by Chen, Morrissey, and Smart they tried to overcome it. It's an ideal-world definition, so in the ideal functionality they then output concrete signature values to the verifier. Unfortunately this was done in a too simplistic way. Namely, they modeled signatures as truly random values, to capture that signatures are not allowed to reveal anything about the signer's identity. But that is in fact way too strong, because a signature is not a random value: if you have the key, you can clearly distinguish it from a random value. So this model was now way too strong; it could not be realized by any construction. Dealing with cryptographic values in simulation-based definitions is an inherent struggle. So when the work on formal models for DAA continued, they went to the more established game-based definitions. But unfortunately the first definition in that world had the exact opposite problem: it's too weak. It allows totally forgeable schemes to be proven secure. And that means that even if you have a security proof in that model, it doesn't mean anything, because you can also prove a broken scheme to be secure.
Then there was some follow-up work that added extensions and was also used, for instance, for the Enhanced Privacy ID scheme for SGX, but these were all built on the initial scheme and the initial model, and that means they all inherited this unforgeability flaw. Then there was a paper by Bernhard et al. where they discuss all these flaws in the existing models and try to solve them by presenting a very extensive set of different properties, I think eight or nine properties in total. But unfortunately they did that not for real DAA but for pre-DAA. In pre-DAA, the TPM and the host are considered to be one party. That means they share the same corruption status: either they're both honest or they're both corrupt. But then it does not capture the most interesting case, namely a corrupt host with an honest TPM, which is exactly when you want to bootstrap security. If you have to assume that both are honest, that renders the use of the TPM basically useless. So interestingly, despite DAA being there for 12 years, despite 500 million chips, and despite quite a line of work on formal security, there was not a single security definition that was actually achievable and defined the security properties we wanted. So we gave it a shot and hopefully got it correct this time. We went for a security model in the UC framework. I will not go into any details, but we did model the TPM and the host as separate parties. And given that we are in the ideal world, we also had to output signatures as concrete values to make it usable in practice. We did that by not modeling the signature as random, but by creating signatures under random keys, because that way you can prove that the functionality doesn't get any information about the identity as input and cannot leak anything in the output. So that's the idea of the model. Okay, so finally we can make a check mark after step one: we have a security model now.
Now the task is to find a cryptographic protocol that is provably secure in that model. The DAA protocols actually all work with a similar structure. The TPM first generates a secret attestation key and then gets a blind signature on that key from the issuer, which could be the TPM manufacturer or some other entity. It's crucial, of course, that the issuer creates that signature, the membership credential, in a blind way. Then, if the TPM wants to make an attestation, it basically proves that it signed a message with the secret key and that it holds a membership credential on that key. But it does so in a zero-knowledge way, without leaking any information about the key or the credential it holds on that key. The different DAA protocols mainly differ in the signature scheme the issuer uses to give you that blind membership credential on the blind key. The most recent TPM specification, as I said, uses protocols that are based on elliptic curves and pairings. And as I said, the TPM offers very generic APIs, so there are actually a lot of different protocols out there that fit that structure. The work can basically be split into two lines: one is based on Camenisch-Lysyanskaya signatures and the LRSW assumption, the other on BBS+ signatures and the q-SDH assumption. Some of them, as I mentioned, are also ISO-standardized. What we then did, of course, is look at these protocols and try to find out whether we can prove them secure in our formal model. It turns out we cannot. They're either actually insecure, or they cannot be proven secure due to some inherent issues. In particular, the scheme that is ISO-standardized and based on LRSW has a tiny flaw: it allows a very trivial credential, one that is valid for any attestation key, to be accepted by a verifier. So that's pretty bad. What we then did is propose two new protocols, based on LRSW and q-SDH, and prove them secure in our model.
They're basically as efficient as the existing ones, because they are very similar to the previous schemes; we only had to fix some details, add a few checks, some extra proofs, and a few elements, but it doesn't impact the efficiency a big deal. Okay, but that's good now, right? We finally have a protocol for DAA that is proven to be secure. We just need a secure implementation of it. And as I mentioned, our protocol is efficient; in particular, the TPM part is really lightweight. We made sure of that, so then we should be done, right? You just implement it and we have a secure implementation. With that mindset, we got in contact with the TCG, the Trusted Computing Group that standardizes those TPMs, and asked them: it would be really nice if you could now implement or support our provably secure versions of DAA. That was the moment when we realized that our understanding of the real world and the real real world are not actually the same. Because what we did, and what all the previous work on DAA also did, is simply treat the TPM as a lightweight device: it shouldn't have to do anything really hard, but you can put any code on it as long as it doesn't require a lot of computation. But in the real, real world, the TPM is a piece of hardware that you can only access through a few limited APIs. So in order to be supported by the TPM, a protocol actually has to work with those APIs that are standardized. Okay, so then we had a look at how those TPM interfaces actually look, and here is a very high-level, very simplistic representation of the APIs. The basic idea is that they can generate arbitrary signature proofs of knowledge. In a proof of knowledge you have the prover, the TPM in this case, which holds some secret key and wants to prove that it really has that secret key.
The way it does that is it first sends a random value, g^r, to the verifier, and then it gets a challenge. For a non-interactive signature proof of knowledge, it hashes the t value, the random element from before, together with the message that is supposed to be signed, and then computes the final value s = r + c * sk, where sk is the secret key. The verifier can then check that this is really correct. And it's known that those proofs are unforgeable and zero-knowledge. The TPM interfaces basically give you an interface for each of the different steps in the protocol. You can create keys; you have a commit interface that gives you the g^r; you have a hash interface where you can get the challenge; and finally you have a sign interface where you give as input a hash and a pointer to the randomness of the first step, and you get as output the s value. You can already see a difference between those two pictures, and that's actually where the power of the API comes from: in the commit interface you can give an arbitrary point as input, so you can get those proofs for arbitrary group elements. As I said, that's where the flexibility comes from. Unfortunately, our protocols are not compatible with those interfaces, so they cannot be bootstrapped from them. But actually for a pretty good reason: our protocols were designed so that they do not require the TPM to be a static Diffie-Hellman oracle, as the previous protocols did. The earlier protocols required a static Diffie-Hellman oracle, and the TPM is one. And we now see why this is actually a bad idea. Being a static Diffie-Hellman oracle means that you can get the TPM to compute values P^sk for arbitrary points P. It's not immediate from one interface call, but you can make three calls and compute it as shown on the slide. And it really works for arbitrary points.
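To make the steps concrete, here is a minimal sketch of such a Schnorr-style signature proof of knowledge. It uses a toy multiplicative group instead of the elliptic-curve groups a real TPM uses, and the function names are illustrative, not actual TPM 2.0 commands.

```python
import hashlib
import secrets

p = 2**255 - 19   # prime modulus of a toy group (illustrative, not a secure choice)
g = 5             # illustrative generator
q = p - 1         # exponents are reduced modulo the group order

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def sign(sk, msg):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                                   # "commit" step: t = g^r
    c = int.from_bytes(hashlib.sha256(t.to_bytes(32, "big") + msg).digest(), "big") % q
    s = (r + c * sk) % q                               # "sign" step: s = r + c*sk
    return t, s

def verify(pk, msg, t, s):
    c = int.from_bytes(hashlib.sha256(t.to_bytes(32, "big") + msg).digest(), "big") % q
    return pow(g, s, p) == (t * pow(pk, c, p)) % p     # check g^s == t * pk^c

sk, pk = keygen()
t, s = sign(sk, b"attestation")
```

The three functions line up with the TPM's create-key, commit, and hash-plus-sign interfaces described above; the split into separate API calls is what gives the host its flexibility.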
The problem with that is that a corrupt host can now craft these points in a special way. It can first call the oracle with P being g^sk; then it gets back (g^sk)^sk = g^(sk^2). Then it takes that point, issues it again as the point P, gets g^(sk^3), and so on. So it collects more and more of these tuples. And it was shown that if you have a long sequence of those values, computing the discrete logarithm is much simpler than if you just had the value g^sk. It can in fact reduce the security of a 256-bit curve from 128 bits to 85 bits. So interfaces that give you such a static Diffie-Hellman oracle are a really bad design and should be fixed anyway. So then the TCG asked us: okay, if we have provably secure protocols that do not require the TPM to be a static Diffie-Hellman oracle anymore, how should the interfaces look? To answer that question we built on work by Xi et al., who have shown that in the DAA protocols the generators often have a very specific form, namely P = g^y, where the discrete logarithm y is known to the issuer; it's the secret key of the issuer. The good thing is that if the point P has such a form, it cannot be used for the kind of attacks I've shown you earlier. So if we can be sure that the TPM only receives points of that structure, we can actually use those points securely. The idea is then to let the issuer prove, in a zero-knowledge proof, that he has chosen P of that particular form. The issuer can even compute P^sk himself, by simply taking the public key of the TPM and raising it to the discrete logarithm y of the point he wants to use in the zero-knowledge proof.
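The feedback loop just described can be sketched in a few lines. The oracle function below stands in for the three TPM calls that together return P^sk; the group and the secret are toy values, purely for illustration.

```python
p = 2**255 - 19   # toy group (illustrative)
g = 5
sk = 123456789    # the TPM's secret; known only inside the oracle below

def static_dh_oracle(P):
    # stands in for the three TPM API calls (commit, hash, sign) that
    # together let the host compute P^sk for an arbitrary point P
    return pow(P, sk, p)

def collect_powers(n):
    # a corrupt host feeds each output back in: g -> g^sk -> g^(sk^2) -> ...
    points, P = [], g
    for _ in range(n):
        P = static_dh_oracle(P)
        points.append(P)
    return points

powers = collect_powers(4)   # [g^sk, g^(sk^2), g^(sk^3), g^(sk^4)]
```

The list of powers g^(sk^i) is exactly the kind of input that makes discrete-log recovery cheaper than attacking g^sk alone, which is where the drop from 128-bit to 85-bit security comes from.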
The suggestion by Xi et al. is then to add one new interface, a bind interface, that takes the generated point P together with the zero-knowledge proof, and allows the TPM to verify that P is really chosen in that particular form. The TPM is then supposed to store P as a cleared point once the proof checks out, and to use only those cleared points in the commit interface. That works for all points that have this particular structure, and it is sufficient for basic DAA. But as soon as you want some further extensions, in particular signature-based revocation, the trick doesn't work anymore. In signature-based revocation the TPM will get random points as input that were generated by other users, and it gets them at the moment of attestation. So when it gets them, there is no one around anymore who can make those proofs, because the points were not generated by the party that wants to see the proof. But since for that particular protocol the point just has to be random, there is actually a simpler fix: instead of giving the commit interface the point as direct input, we give it a string and let the TPM derive the point as the hash of that string, because then we are also sure that P cannot be g^sk, et cetera. So this was our first proposal to the TCG for a revised TPM interface that gets rid of the static Diffie-Hellman oracle. And we thought it's not too bad; I mean, it's just some more lines. But then we learned that this is actually bad, because it's a major revision to the TPM. Adding new TPM commands is a big deal, not only because you have a large consortium that all has to agree on it, but also because the TPM is really bounded in terms of memory, and that's really costly. And storing the point P, which would have to be associated with the key, would be a change to the key structure, which is also a significant modification.
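The hash-derived-point variant of the commit interface can be sketched as follows. This is a toy hash-to-group construction over a multiplicative group, purely illustrative; a real TPM would hash to an elliptic-curve point, and the function names are not actual TPM commands.

```python
import hashlib

p = 2**255 - 19   # toy group (illustrative)

def hash_to_point(label):
    # the TPM derives the base point itself from the host-supplied string,
    # so the host can never smuggle in a crafted point like g^sk
    h = int.from_bytes(hashlib.sha256(label).digest(), "big") % p
    return pow(h, 2, p)   # squaring keeps the result in a fixed subgroup

def commit(label, r):
    # revised commit interface: takes a string, not a raw group element
    P = hash_to_point(label)
    return pow(P, r, p)
```

Because the point is a hash output, it is effectively random, which is also exactly what the signature-based revocation extension needs.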
But as I said, the second suggestion we made, the hash-based variant of the commit interface, would be okay. It would only be okay, though, if with those interfaces we can support everything that was there before. Because what we had effectively done with the first proposal is introduce dedicated interfaces for every protocol, which destroys the nice generic design of the TPM interfaces. So the good thing was that we now had some TPM interfaces to work with that did not impose a static Diffie-Hellman oracle. The bad news was that one strand of work, all the LRSW-based schemes, would not work with these interfaces, because they heavily relied on generators that had this special structure for the issuance protocol. So we basically had to go back to the whiteboard and start the work again. And to our own surprise, to be honest, we actually got it to work, but we had to go quite deep into the crypto layer. We really had to change the way the issuer parameters in these Camenisch-Lysyanskaya signatures are chosen and how the issuance protocol is done, and basically turn the values a bit upside down. But we were able to do that, and also to prove that scheme secure under standard assumptions. Okay, so then we're done, right? We have a revised TPM interface that does not require a static Diffie-Hellman oracle, and we have a provably secure protocol. Well, we're not really done, because the only thing we have shown so far is that we have a provably secure protocol and that we can get it to work with those interfaces. We haven't shown that the instantiation with those interfaces, the orchestration of it all, actually gives us a secure protocol. In particular, what still has to be shown in a provably secure manner is that the TPM-based contribution, the TPM-based signature proof of knowledge, is unforgeable and anonymous. And then there was bad news again.
The interfaces, in particular the initial interfaces standardized in TPM 2.0, do not allow such proofs. You cannot prove that the TPM-based SPKs are unforgeable. The problem is that in order to do so, you have to program the random oracle. In particular, you can only simulate the sign interface if you program the random oracle for that hash value to a special point. Yeah, it's a bit tricky. And in order to do so, you really have to know which hash is used in that final sign command. Here that's not possible, because the host has full access to all the different APIs and can make many hash calls using the same t value. So you have no clue when to program the random oracle such that you can simulate these interfaces. The good thing is that the fix proposed by Xi et al. is actually very simple: just add a nonce to the sign interface and hash the hash again together with that nonce. Then, in the security proof, we can make sure that the random oracle is programmed at the moment when the signature is being created. So we're done now, right? Well, of course not; it's not that simple. Because this fix by Xi et al. allows you to prove unforgeability, but it's actually bad for privacy: we have now introduced, at a very low level, a subliminal channel. This nonce, which is chosen by the TPM, is part of the final signature. And you do not really know: is the nonce really random, or is it just looking random, and maybe it's an encryption of my identity or even my secret key? You cannot re-randomize it on any higher layer of the protocol. So no matter how good your protocol is, if the nonce is chosen that way, you can never be sure that it doesn't contain any identifying information. So we proposed a fix to the fix. It basically says that instead of using a nonce chosen only by the TPM, we use a nonce to which both the host and the TPM contributed.
It looks a bit complicated, but the idea is basically just that we have to make sure the nonces do not depend on each other. The TPM has to commit to its nonce before it gets the nonce from the host, and the host likewise is not allowed to see the TPM's nonce before it chooses its own. But then we are actually really done. With those interfaces we could prove that we can not only bootstrap the protocol from them, but can actually do so in a secure way: the signature proofs of knowledge generated with these interfaces are unforgeable and anonymous, and do not contain any subliminal channel. Okay, so we're finally done with our roadmap, or let's say we're in the process of being done, because of course getting those changes into the real world is a long and slow process. Some of those changes have been accepted, others are under review and hopefully will be accepted soon. The next step is of course that we continue to work with the TCG to hopefully make sure that the TPM interfaces are revised, and we are also in contact with ISO and with Intel to make sure that other specifications of the protocol that are flawed get fixed as well. We also made a FIDO key attestation using DAA; I can talk about that offline if you're interested. As a conclusion on how to get provable security into the real world: well, first, it comes in really handy if provably secure crypto and the real world are actually compatible. It's really hard to get that right in the first version, I think, but it's good if you then follow the process into the real world, because it will very likely require changes to the model or to the protocol. And if you want to make sure that those changes are done in a provably secure way, you'd better be there to make them.
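The commit-then-reveal nonce exchange just described can be sketched like this. The function names and the 32-byte sizes are illustrative assumptions, not part of the actual specification.

```python
import hashlib
import secrets

def tpm_commit_nonce():
    # the TPM picks its nonce share and commits to it
    # before it is allowed to see the host's share
    n_tpm = secrets.token_bytes(32)
    return n_tpm, hashlib.sha256(n_tpm).digest()

def host_nonce():
    # the host picks its share only after receiving the TPM's commitment
    return secrets.token_bytes(32)

def combine(n_tpm, commitment, n_host):
    # the host checks that the TPM opened the nonce it committed to,
    # then both shares are hashed together into the final joint nonce
    assert hashlib.sha256(n_tpm).digest() == commitment
    return hashlib.sha256(n_tpm + n_host).digest()

n_tpm, com = tpm_commit_nonce()
nonce = combine(n_tpm, com, host_nonce())
```

Because neither party can choose its share as a function of the other's, the TPM cannot bias the joint nonce into an encoding of its identity, which closes the subliminal channel while keeping the unforgeability fix intact.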
And ideally, of course, provable security should be there from the beginning, to make sure that the standards and everything are correct, because now we have ISO standards that are flawed, we have TPM interfaces that give a static Diffie-Hellman oracle, et cetera. But to be honest, we actually did have a lot of security proofs here from the start. It was just that the proofs were pretty often wrong, or the models were not complete. So I think we also have to rethink the way we review those provable security proofs, but that's a very long discussion. And finally, of course, it often takes way longer than you would expect. In particular, it's not such a sequential type of work where you make a model, a scheme, a security proof, and an implementation; it's actually way more interactive back and forth than in theory. Okay, but that's it. Thanks. So we have time for one question. So sorry, Nigel, we'll have to sit down. All right. I just would like to point out a bit of history, in that ISO and the TCG were told that the CPS10 thing they standardized was insecure before they started standardizing it. Okay. In addition, the TCG was told that its API did give a static Diffie-Hellman oracle and would reduce the security of their things, and they chose to ignore the experts. But I mean, they wanted to have, I guess, flexibility; they wanted to have support for the different protocols. And they also said at the time that they would only make the change to the APIs if we could show them that they can still support the different protocols. So basically, without giving them that second step of the answer, they wouldn't do it. I mean, we still don't know if they will change it, but we have to help them to keep it as functional as it was with the bad TPMs, basically. I'm going to claim Nigel didn't actually ask a question. Were you trying to ask something? Because I was going to cut you off. Yeah, yeah. No, I really did want to ask that one.
Because we're behind schedule. Okay. So anyway, let's thank Anja again.