Okay, thank you very much for being here, and thank you very much to the Confidential Computing Consortium for sponsoring this, as the university did not have funds for my visit. This is joint work with experts, and the way I define experts is that they have profoundly scientific answers to the questions I have, and my questions are not trivial, as people who know me will confirm. Thomas Fossati, one of the leaders of this effort, has worked a lot on it from the Arm perspective and is now at Linaro; Simon Frost; and more recently Shale Xiong, who has worked on the formal perspective of the Arm CCA artifacts. I have organized the talk as follows. First, some motivation: why did we start on the problem of formal verification, and why is it required to analyze this scientifically? Then our approach: the specific tooling we use, and the specific kind of verification we use, called symbolic verification, and why. Then some results I will highlight from roughly the last four years of our work, done mainly, as Mike mentioned, in the CCC Attestation SIG, the special interest group with people from Intel, Microsoft, Google and others. Then the verification challenges we faced with the Intel and Arm technologies specifically, and finally I will summarize with a few words on the directions that need to be pursued for this kind of research. Right, so the motivation, first of all, is that we have ad hoc designs; let's face it. Designs made without verification, without a systematic approach, are not sustainable.
They are going to fail at some point, and that is why we need formal guarantees: a scientific way of analyzing these designs, which are so complicated that the designer may not be aware of the corner cases. That is one piece of news here: Intel let Google hack its trusted execution environment, Intel TDX, one of Intel's upcoming solutions, and they found 10 bugs, some ranked among the most severe. I want to focus, as in the title, on one critical mechanism, the attestation mechanism, which provides what I would argue are the most important guarantees brought back to the user; without attestation one cannot really imagine confidential computing. It is one of its most important building blocks. What we call architecturally defined attestation is a term we use specifically here, meaning the attestation defined by the specific technology, whether Intel SGX, TDX, Arm CCA or AMD SEV-SNP: everything provided by that vendor's technology, as opposed to whatever a software designer would build on top of it, is the architecturally defined attestation.
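To make the concept concrete, here is a minimal, self-contained sketch of architecturally defined attestation as just defined. All names are hypothetical; an HMAC with a pre-shared key stands in for the asymmetric, vendor-certified attestation key a real TEE would use.

```python
import hashlib, hmac, os

# Toy sketch of architecturally defined attestation (hypothetical names).
# A real attester signs with an asymmetric, vendor-certified attestation
# key; here an HMAC with a pre-shared key stands in for that signature.

ATTESTATION_KEY = os.urandom(32)          # provisioned into the attester
REFERENCE_MEASUREMENT = hashlib.sha256(b"expected code+data").digest()

def attest(challenge: bytes, software_state: bytes) -> dict:
    """Attester: measure the stack, bind the challenge, 'sign' the evidence."""
    measurement = hashlib.sha256(software_state).digest()
    body = challenge + measurement
    return {"challenge": challenge,
            "measurement": measurement,
            "signature": hmac.new(ATTESTATION_KEY, body, hashlib.sha256).digest()}

def appraise(evidence: dict, challenge: bytes) -> bool:
    """Verifier: check the signature, freshness, and reference values."""
    body = evidence["challenge"] + evidence["measurement"]
    ok_sig = hmac.compare_digest(
        evidence["signature"],
        hmac.new(ATTESTATION_KEY, body, hashlib.sha256).digest())
    ok_fresh = evidence["challenge"] == challenge
    ok_ref = evidence["measurement"] == REFERENCE_MEASUREMENT
    return ok_sig and ok_fresh and ok_ref

challenge = os.urandom(16)
evidence = attest(challenge, b"expected code+data")
assert appraise(evidence, challenge)              # genuine, fresh evidence
assert not appraise(evidence, os.urandom(16))     # replay under a new nonce
```

The three checks in `appraise` correspond to the signature verification, freshness and reference-value comparison discussed throughout the talk.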
This is how we define it, terminology-wise. A very simplified, high-level overview: we have an attester and a verifier. Let's say we remove the complexity of the relying party, so we have only two parties: the attester is running in the cloud, and the verifier wants to verify whether its code and data are the correct ones. That is the basic problem. Without going into TLS or the other protocols that are used, if we have this core part formalized, which is missing at the moment, one can then reuse it and extend the solution to the relying party. That is why we focus on the core problem: how does the verifier get to know that the attester is running the code and data it wanted? In the simplified view, the verifier sends an attestation challenge; the attester includes this challenge, signs it, and generates evidence over the whole software stack; that attestation evidence is sent back to the verifier, which conducts an appraisal based on a specific policy, which can depend on the use case. Then, optionally (that is why it is in gray), after this appraisal and a trust decision on whether the attester is genuine, the verifier can send secrets or sensitive data, which it would not send if the attester were not trustworthy. It is essentially establishing the trustworthiness of the attester and the platform underneath. That is the core idea of architecturally defined attestation, and I would argue it is the basic building block that has been missing for years: the TLS block, SPDM and other protocols have been formalized, but this part really has been missing, and this is really
what we want to focus on. Having understood this core idea at a high level, let me go into a bit more detail. The attester, as you saw on the last slide, generates the evidence; the verifier simply appraises that evidence. For generating evidence, the attester needs an identity, which can be anonymous, supplied by the identity supplier, one of the roles. The verifier needs access to certain things: the endorsements provided by the endorser, some certificate chains (the PCK certificates in the case of Intel, for instance), some reference values against which it can compare the data in the evidence (does this evidence correspond to the reference values I expected?), and finally a policy depending on the use case. What we wanted here was holistic coverage of all the phases involved in attestation. It starts with the provisioning phase, which is itself of two types: attester provisioning, where the attester is provisioned with its identity, anonymous or non-anonymous, and verifier provisioning, where the verifier is provisioned with the endorsements, reference values and policy. Then come the initialization steps: setup steps required once, not on every attestation, for instance obtaining an attestation key certificate in the case of Intel. Then finally the attestation protocol, the part that runs every time we attest: a challenge is sent, evidence is formed, and so on. Our previous work, and previous works in general, focused on this attestation protocol phase only, and we found that holistic coverage of the phases is required, including the initialization phase as well as the provisioning phase,
so that one can see the overall picture. Now, this is not easy. The problem is the wide variety of trusted execution environments: Intel SGX and TDX, AMD SEV-SNP, Arm CCA, RISC-V, IBM PEF, and other solutions that may emerge in the future. How do we deal with this diversity, given that we want an interoperable solution for the verifier to interact with different kinds of TEEs? Each solution comes with its own terminology and its own hardware-specific requirements, which are not always public. So we have criteria for grouping the different solutions, and I will describe three granularities or criteria. The first is isolation granularity, as in the last talk: at what level of granularity is the protection applied? It can be process-based, Intel SGX being the only candidate here, or VM-based, which covers all the others. The second is whether it is a complete product or an architectural specification, meaning one can build various implementations based on the specifications, and those implementations must meet them; Arm and RISC-V are in the category of architectural specifications, and the others are final products. The third criterion is the attester composition: whether it is layered, as defined in the RFC 9334 RATS architecture, one attester building on top of another, or composite, meaning two or more attesters, with one or more of them sending evidence to a lead attester, which finally sends the combined evidence to the verifier. Arm, in this categorization, has a composite attester, and
to show you a picture of where we already stood: we had worked on the Intel SGX category, with both EPID and DCAP, the two attestation mechanisms for SGX, already verified. The scope of this specific project at the CCC SIG is TDX, CCA and AMD SEV-SNP; for now we have TDX and CCA, which is what I will describe. As you will see, this already covers a lot of the variety across the different flavours of TEEs: it covers the VM-based category, it covers product versus architectural specification, and it covers both the layered and the composite attester. So we have basic verification primitives on which one can build a formally verified solution; that is what we were aiming at. And, as I said, the part that would otherwise be missing, the process-based category, we already have from our previous work. So roughly, at a high level, we cover the spectrum I have described. I would be happy to discuss if you have other distinctions in mind, because I am not an expert in the remaining technologies; we have not yet explored them scientifically, and I would like to see what kinds of differences exist there so we can get even wider coverage. Right, another challenge, beyond the wide variety, is that these are complicated designs. These attestation mechanisms are very complicated, and the specifications, which are problematic, are not only vague but also silently updated, with very little support. For instance, one of the flows, specifically for Intel TDX, concerns SVNs, the security version numbers. As we understand them: I ship a security update, I increment the number, and then what I would like is that the SVN coming in the evidence is greater than or equal to what I would have as
a reference value. That is the kind of comparison we typically have, but here the specification says the value coming in the evidence should exactly match the reference value, which is unusual. We posted this on the Intel forum, and the response we got was: okay, this is outdated and needs to be updated. So that is another kind of challenge we face. Okay, coming to the contributions, having described the challenges, I would claim three main contributions for this work. First, we have reusable verification primitives covering all phases of attestation, the phases I defined: the provisioning phase, the initialization phase and the attestation protocol phase. The two artifacts we now have publicly available are for CCA and TDX. For CCA specifically, the contribution is the first formal specification and analysis of attestation in the architectural-specifications group and of a composite attester, as it is the only technology with one. Secondly, the TDX work builds on one of our previous works, with a completely different set of co-authors, including Christof Fetzer. There we looked at the attestation protocol phase only; what we have now is the initialization phase in addition to the provisioning phase, the complete holistic picture, and it is much more detailed than the initial model we had two years ago, when few details were available. We have now included the certificate chain and all the verification steps required to appraise the evidence, as far as they are available in the public documentation of Intel TDX, plus, as I said, the initialization phase and variable measurements. The second contribution is a formal proof of insecurity for Intel's claimed TCB, as stated in one of its white papers; here we made substantial improvements to Intel's specifications. And thirdly, as we are at the open source summit: we have open-source artifacts available, and I invite all of you to bring your expertise and contribute; I will show the link on one of my slides. Not a contribution for now, but a use case and work in progress: another project, also hosted at the CCC Attestation SIG, is the Attested TLS project. There we use the artifacts we developed for CCA to go to the next layer, the transport layer, using CCA as the attester and doing fine-grained reasoning on the combination of the two. As I said, the TLS artifacts are formally verified; what was missing was the attestation protocol. Now we have both parts, and, as you know, security is not composable: you may have two things that are provably secure in isolation, but their composition may not be secure. That is what we need to prove separately, and that is where we are heading. Another use case: Intel approached us, around March, asking whether we could verify the vTPM solution, the virtual TPM for Intel TDX, and we are now working with Intel on reasoning about how the vTPM executes. It is analogous: we have the SPDM solution, and we use the TDX artifacts developed here in combination with the two protocols. Again the same reasoning: the two things might be secure in isolation but not necessarily in combination. Okay, I will now describe the approach we have taken to this formal verification problem, and I would
like to go in three steps, namely the model, the threat model and the properties; I will explain why. First, some introduction to formal methods. I think we now understand the importance of scientific reasoning here, and what formal methods give us is mathematical techniques that guarantee that a model satisfies a specific set of requirements; that is the high-level picture. Typically it has this form: we have a system to analyze, in our case a protocol, an attestation protocol, in the presence of an adversary, a formally defined adversary, and we want to determine whether a specific security property is satisfied or not. The way to go about this, again at a high level, is that the system is formalized as an abstract model out of the system specifications; on the other side we have the requirements, which are formalized as properties in the formal language of the tool we are using. The verification question is then: does this abstract model satisfy the properties we have written down in mathematics? The answer is either that the model satisfies the properties, or that it does not, together with a counterexample: the trace that led to the violation of the property. That counterexample is really important for understanding what went wrong and how the protocol can be fixed to actually be secure. Okay, so as you see, we have three kinds of things, and that is how I divided it: the protocol, the adversary model, and a property or set of properties; that is the model, threat model and properties I will now describe. As I
said, we have analyzed two technologies, and I will give a high-level idea of both. First, the CCA composite attester. It consists of two attesters, as you see here: the platform attester and the realm attester. Each of them has an attesting environment and a target environment: the platform attester has HES as its attesting environment and the monitor security domain as its target environment, while the realm attester has the RMM, the Realm Management Monitor, as its attesting environment and a realm instance as its target environment. The keys corresponding to each of these environments are also shown here. The question for the verifier is whether the combination of the two attesters, the platform evidence combined with the realm evidence, is secure: can I trust this whole stack? In addition to Arm's published specifications, thanks to our Arm co-authors and collaborators, what we have here is a specific instantiation of the CCA attester in the TEE-agnostic architecture, and secondly, something not in the Arm specs, the challenge-response interaction model, with which we have instantiated it. As I said, from architectural specifications one can build various implementations, and we wanted flexibility for different solution implementers to reuse these artifacts. For that, we keep, for example, the interaction between HES and the RMM very flexible, so that you can have different flavours of solutions. One example is the delegated design: in the confidential computing use case, I would like to avoid a round trip between the RMM and HES, going to HES every time for the signatures. I can save that round-trip time by signing at the RMM instead. But this also comes with a
security consideration: the key assigned to the RMM, which I will describe on the next slide, travels over a channel that is assumed to be secure, and if that channel leaks the key, then side channels become possible, which HES would protect against in the other design. The cost of the alternative, always going through the round trip and signing via HES, is that the attester incurs the round trip and, of course, a slower response compared to the delegated solution. The second flexibility concerns the verifier, which can also have different designs. Since we now have two attesters, we could have one verifier corresponding to each of them, the platform verifier verifying the platform attester's evidence and the realm verifier verifying the realm attester's evidence; that is the split design. Or we could have a cooperative design, in which one verifier verifies the whole evidence, here called the remote evidence. We have modelled the flexibility for both designs, so the artifacts can be reused across the different possible implementations.
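The split-versus-cooperative distinction can be sketched in a few lines. This is an illustrative toy, not the actual formal model: evidence is modelled as plain dicts, all names are hypothetical, and the signatures (RAK over the realm evidence, the platform key chain over the platform evidence) are elided.

```python
import hashlib

# Toy model of composite evidence from two attesters (hypothetical names).
# The platform evidence carries the hash of the realm signing key's public
# part, which is what cryptographically binds the two attesters together.

def make_remote_evidence(challenge: bytes) -> dict:
    rak_pub = b"hypothetical-RAK-public-key"
    platform = {"rak_pub_hash": hashlib.sha256(rak_pub).digest(),
                "lifecycle": "secured"}
    realm = {"challenge": challenge, "rak_pub": rak_pub}
    return {"platform": platform, "realm": realm}

def check_platform(platform: dict) -> bool:
    return platform["lifecycle"] == "secured"

def check_realm(realm: dict, challenge: bytes) -> bool:
    return realm["challenge"] == challenge          # freshness

def check_binding(remote: dict) -> bool:
    # Linkage between the attesters: hash of the realm key in the
    # platform evidence must match the key used for the realm evidence.
    h = hashlib.sha256(remote["realm"]["rak_pub"]).digest()
    return remote["platform"]["rak_pub_hash"] == h

def cooperative_verify(remote: dict, challenge: bytes) -> bool:
    # One verifier appraises the whole remote evidence, binding included.
    return (check_platform(remote["platform"])
            and check_realm(remote["realm"], challenge)
            and check_binding(remote))

def split_verify(remote: dict, challenge: bytes) -> bool:
    # Two independent verifiers give separate verdicts; whoever combines
    # them must still perform the binding check between the two attesters.
    platform_verdict = check_platform(remote["platform"])
    realm_verdict = check_realm(remote["realm"], challenge)
    return platform_verdict and realm_verdict and check_binding(remote)
```

The point of the sketch is that both designs must perform the same set of checks; they differ only in where the checks run and who combines the verdicts.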
I will give an example of the delegated design, specifically its initialization phase. I described the two attesters on the last slide; now consider their attesting environments, HES and the RMM. In the delegated design, the RMM is the one doing the signatures, so HES need not be contacted for every signature, saving the round trip, but the channel between them is assumed to be secure. This is one of the advantages of verification: we know exactly which channels are assumed to be secure. The process, at a very high level, is as follows. The RMM requests a key pair, the RAK, with which it will do the signatures. HES is pre-provisioned with a key, the CPAK. HES derives a key pair for the RMM, signs, including the hash of the corresponding public key, using its signing key CPAK, and sends over the key pair it generated along with the platform evidence it formed at this stage, for the RMM to cache and reuse in the attestation protocol. That is the delegated design: the RMM is now provisioned with the key, and in the attestation protocol phase the RMM can use it. Again, the secure and insecure channels are clearly labelled for verification, so the assumptions are explicit. The verifier is assumed to be pre-provisioned with the public key required to verify the CPAK. It sends a challenge; the realm forwards this challenge to the RMM; the RMM includes it in the realm evidence and signs that evidence using the key pair it obtained, without needing to go to HES (hence the delegated design); and then both pieces of evidence, the platform evidence obtained in the initialization phase bundled together with the realm evidence, are sent over to the realm to
be forwarded to the verifier, which then verifies it. That is the attestation protocol phase, at a very high level; I will not go into the details, but the verification, as I said, is also flexible: cooperative versus split verifier. This slide shows the cooperative verifier, which does all the steps together rather than separately for the two pieces of evidence. First the cryptographic steps: signature verification, and checking that the two attesters are cryptographically bound to each other via the hash, that is, that the hash of the public part of the RAK in the platform evidence matches, and that the challenge sent equals the one in the evidence, which guarantees freshness. So we have signature verification, freshness, and this cooperation, or linkage, between the two attesters. Then there is the lifecycle state, whether it is secured or not, which has to be checked; then some compulsory, mandatory checks, which are important, against reference values; then some optional checks depending on the use case; and finally an event checked by the verifier. Okay, switching now to TDX: how is it different from CCA? TDX has a layered attester. We see here a number of entities, each layered above the other, which means the PCE, the Provisioning Certification Enclave, certifies the TD Quoting Enclave. In brackets are the important claims included in the evidence: for the TDQE, for instance, the QE SVN, its security version number, and the QE MRSIGNER; for the PCE, likewise the PCE SVN; for the TDX module, its SVN as well as measurements; and for the TD, static and runtime measurements, the runtime measurements being an addition compared to SGX, which has static measurements only. The process is roughly as follows, with steps one and two showing the initialization phase,
in which the PCE attests the TD Quoting Enclave, which sends evidence and gets back a certificate, and then, in the second phase, the TD and the TDX module send their evidence and obtain a remote evidence, namely the TD Quote. That is the overall flow of TDX. And just to show it at a high level again: this is the initialization phase up to this point, and then the attestation protocol afterwards, which includes the local attestation between the PCE and the TDQE and then the second phase between the TD, the TDX module and the Quoting Enclave; the same layered structure shown in a different way, in protocol form. Let's now move to the threat model for the verification of these protocols, which is one of the strengths of this approach: we know exactly against what specified threat model we claim security, not something vague. That means we define the adversary capabilities precisely: full control of the communication channel; variable measurements, meaning measurements are not fixed as in our work a couple of years ago, where the adversary could not change the measurements, whereas now it can; entities assumed to be honest as well as malicious, which means the realm in the case of Arm and the TD in the case of TDX can carry out malicious activity inside, and for both we take the measurements as input from the adversary, meaning the adversary can generate any measurements (this relates to the variable measurements); and secure channels: as I showed in the diagrams, we state clearly which channels are assumed secure, and once an assumed-secure channel is in fact insecure, every guarantee is gone.
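A toy, hand-rolled flavour of the threat model just described: the adversary fully controls the channel, so it can record any message and replay it later, and measurements are adversary-chosen. The structures are illustrative, not the actual symbolic models.

```python
# Toy Dolev-Yao flavour: the network IS the adversary, so anything sent
# can be recorded and replayed in a later session. Hypothetical names.

def realm_evidence(challenge: bytes, measurement: bytes) -> dict:
    # Variable measurements: the adversary may choose `measurement` freely.
    return {"challenge": challenge, "measurement": measurement}

def is_fresh(evidence: dict, current_challenge: bytes) -> bool:
    # Freshness holds iff the evidence binds the current session's nonce.
    return evidence["challenge"] == current_challenge

# Session 1: honest run; the adversary records the evidence off the wire.
recorded = realm_evidence(b"nonce-1", b"some-measurement")
assert is_fresh(recorded, b"nonce-1")

# Session 2: the adversary replays the recorded evidence under a new
# nonce; binding the challenge into the evidence defeats the replay.
assert not is_fresh(recorded, b"nonce-2")

# Platform evidence, by contrast, is produced once at initialization and
# replayed in every session by design: it binds no challenge, so no
# freshness property is claimed for it.
platform_evidence = {"lifecycle": "secured"}
```

This is the intuition behind the per-attester freshness results: evidence that binds the session challenge resists replay, while challenge-free evidence does not, and must not be claimed fresh.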
That is the real crux of it. Quickly, the properties of interest for these protocols. Going back to the formal methods slide: we have a model constructed from a system, and everything thereafter operates on that model, not on the original system, so one should be very careful that the model really reflects the original system; that is why we have sanity checks ensuring everything is correct. Then: integrity of the data, that the evidence is not modified by the adversary, or at least that we can detect it; freshness of the evidence, that it cannot be replayed; confidentiality, meaning the attestation-related keys I showed on the slides for Arm and TDX are protected, and the adversary can neither see nor generate them; and authentication, which is closely related to integrity but is precisely about data origin, whether verifier authentication or attester authentication. Quickly, the results of this verification effort. For CCA we had two attesters, because it is a composite attester, and this table shows the results: the platform attester has integrity but no freshness, because the same evidence, obtained in the initialization phase, is replayed every time; that is by design and not a problem. Authentication is not fulfilled by the architecturally defined attestation; recall that we are not including TLS or the transport protocols, we are dealing with the crux of attestation itself, and that is why we need to integrate with TLS to provide authentication. The remaining properties are satisfied, and here is a snapshot showing that the artifacts prove the properties within two minutes. Now to the TDX part, which
I described briefly. Intel claimed a certain TCB in one of its white papers back in 2021. What we did was analyze it: the whole certificate chain is as shown, something never clarified in the specifications of Intel TDX, and it looks very similar to SGX, with the difference that there is a TD instead of an enclave. Correspondingly, these are the certificates for the entities, meaning the TD has a TD Quote, the QE has an AK certificate, and so on, and these are the corresponding certificate revocation lists. If you look carefully, one notable point is that all software other than what is listed here is not to be trusted by the trust domain, and it was genuinely suspicious to us how it could be possible not to trust one entity within the trust chain, an entity that, if malicious, could generate another chain. So we analyzed it formally, and the result is that, based on Intel's claimed TCB, none of the properties, integrity, freshness, confidentiality, hold: because the PCE is untrusted, the adversary can mount replay attacks, and integrity is not satisfied. In our proposed TCB we include the PCE as trusted, and we provide both artifacts; with a single line changed you can analyze either, as we tried to be as flexible as possible. For authentication, again there are no guarantees, because this is the remote architecturally defined attestation alone, without TLS or any other transport protocol. We reported this to Intel, and Intel updated the white paper, replacing the TD Quoting Enclave with the TD attestation software, so that all the layers are visible. What is problematic is that it was updated at the same link, replacing the old white paper. We have evidence that it was changed six times within the last six months at the same links, and we reported this to Intel as well, asking for the transparency that we are lacking in
these kinds of trusted execution environments. Quickly, the verification challenges; I have already given you some insights. For CCA, the specifications were about 400 pages; fortunately we did not need to read them all, because we had the Arm colleagues with us. Specifications in natural language are a big hurdle for verification, which needs precise descriptions, and various implementations are possible because it is an architectural specification. For TDX we had a separate set of challenges: it is a whole product, so there were about 1500 pages of specs, and it inherits what is in SGX, since, as you see in the design, the TD Quoting Enclave and the PCE are SGX enclaves, and SGX itself has some 5000 pages in the Software Development Manual alone. The specs are again in natural language, which is true of all the vendors; no one describes them formally, and the specifications are ambiguous, incomplete, contradictory and outdated, as in the example earlier, and updated at the same link, which in our view is a critical issue. Okay, let's see what we take home from all this effort. Here is what I have been discussing: the transport layer on top, SPDM or the TLS protocol, and below it the architecturally defined attestation, DCAP or EPID, or the platform attester and realm attester for CCA. This is the piece we have added to the picture, and it is now formally verified, so we can integrate the two, which leads to the Attested TLS project, again hosted at the CCC Attestation SIG. One can have different compositions: the attestation evidence is generated before the TLS handshake, within the handshake, or after it. RA-TLS, mentioned in the last talk, is in the pre-handshake category, which admits collusion attacks and replay attacks; that is why intra-handshake attestation is the way forward, and that is what we use in this solution. We
really need more discussion between the three communities, the systems engineering, security and formal methods communities, and I really love events like this where we get to talk to each other to solve these problems and move things forward. Complete specifications need to be made open, even where the code bases are open: a complete specification is something verifiable; we can analyze it and check whether the implementation matches the specification, with open-source code we can analyze that as well. But without the specs, even with an implementation in hand, we cannot do the kind of verification we want, because there is nothing to verify against. And there is a need for systematic design of attestation protocols, the earlier the better. There is no way around it; we cannot keep saying "this is the design", then it breaks, then we patch, and go around the circle again; that will never end. Things should not be trusted until they are formally verified; that is perhaps the key point of this whole effort. Here are some of the key references I used in my presentation, and I would like to invite all of you to bring your expertise to this project, which is available at this link; some specifications are available, as is our research paper, and we are in the process of updating it; the second draft will be available soon. I really want to thank you all for your attention, and I am happy to take any questions, comments, criticism or critique.

We have a minute or so for questions; who has a question? I am just going to say how important I think this is. The points on your previous slide about everything being designed in the open, and about specifications, are so important for all the work we are trying to do. We are trying to create roots of trust, and without that happening in the
open, it is basically impossible. So I really appreciate all the work that the folks at Dresden, Arm and Intel have put in; it is great to see academia interacting with industry in these sorts of ways, and open source, so thank you so much for all the work. Any specific questions? If not, I will set that aside and maybe have a question later on; that's fine. We have 15 minutes now for a break, so please go and find something to drink, some water or anything; it is a bit of a labyrinth round here. We will be back at a quarter past. Thank you very much indeed, everyone.