Okay, so this is joint work with Rafael Pass and Elaine Shi from Cornell Tech and Cornell University. Let me start by putting our work in context, to explain why we want these formal abstractions, by looking at how trusted hardware has historically been viewed in different communities. We can identify two different trends here. On the one hand, in our own crypto community, especially in theoretical cryptography, hardware assumptions have usually been used as a minimal set of assumptions that allow us to circumvent theoretical impossibilities, in, say, composable security. The goal here was usually theoretical feasibility from the smallest assumption possible, with little concern for the practical performance you end up getting from these protocols. On the other hand, in the architecture and systems security community, the goal has been to view trusted hardware as a way of getting trusted execution of general-purpose user programs, with a focus on the actual expressivity the hardware gives you, and on cost effectiveness and usability across many different programs. It's interesting to see that various projects in this space, in both academia and industry, from the hardware and architecture community, have converged to this notion of attested execution. In this talk I first want to define this notion a bit more formally, to see what it really is, and then turn to the more interesting question from a theoretical perspective: what does this notion allow us to express, or not?

So let's start from the standard setting where a client wants to outsource computation to an untrusted server. In parts of this talk I might use terminology and models reminiscent of Intel's SGX, but the aim is to capture the essence of attested execution in a more general sense. The server, who has access to a secure processor, can spin up a so-called enclave, an isolated execution region that can compute a program in isolation. Trust is bootstrapped by a trusted manufacturer that embeds a secret attestation key inside the hardware, which can then be used to remotely attest to the correct execution of a program.

This is a nice picture, and it tells us what we want from attested execution, from trusted hardware, but it's not a very precise abstraction to work with. So why would we want a more formal ideal abstraction? On the one hand, systems built on top of trusted hardware have historically tended to prove security in an ad hoc fashion, for lack of a formal model to work in. This is something we'd hope to fix with a formal and precise abstraction. But it's important to note that we don't, and actually can't, claim today that any secure processor on the market realizes any ideal abstraction we can come up with. The next important step in this area is therefore to have secure processors that can be formally verified to implement some form of ideal abstraction. So let me now dive into the actual formal model that we work with.
We model attested execution as an ideal functionality in a UC-style framework. This functionality groups all secure processors, all platforms from a given manufacturer, in a registry, and its interface is pretty simple (I'll show a small code sketch of it below). At initialization time, so at manufacturing time, it generates a public and secret key pair for attestation; these keys are shared by all platforms from the same manufacturer, and at any time a remote party can query for this public key. When a party that belongs to the registry, so a party that has a secure processor, wants to install a new program, the ideal functionality spins up a new enclave and assigns it a unique enclave identifier, which lets us refer to this stateful program over time. And whenever the party that installed an enclave program wants to run it on a particular input, we simply run the stored program, update its state, so essentially its memory, and produce an attestation: a digital signature under the shared secret key over the program that was computed and its output.

Let me say a bit more about our modeling choices. We model this attested execution ideal functionality in the UC framework. Why the UC framework? First, it's worth noting that trusted hardware is probably not going to be used in isolation; it's going to be part of larger protocols, where modular composition is a desirable property when we want to prove the security of systems in this space. And why the generalized UC framework? This has to do with the fact that attestation keys are inherently shared across protocols: the way the system is set up in practice, all platforms inherently share state in this sense. This means that attestations produced in one run of a protocol have a lifetime that goes beyond that particular protocol, and this is a source of technical difficulties we have to deal with. A concrete example of a security issue that can come up from this, one that is well known in the cryptographic community, is non-deniability: if one party produces an attestation in one run of a protocol, this provides undeniable proof that some party belonging to the registry, so some party with a secure processor, participated in this protocol. I'll come back to this later.

Let's now move on to the maybe more interesting question: given such a formal abstraction, what can we actually do with it? On the one hand, and this may not be very surprising, we can show that this is a very powerful abstraction. In particular, it implies a notion of obfuscation for stateful programs that is impossible to obtain even if we go the full route of general cryptographic obfuscation, and we also show that you could not do this if your trusted hardware were stateless. In the interest of time I won't be able to go over this in this talk, but I invite you to see our paper for the formal definitions and constructions we use here.
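As an aside, to make the install and resume interface just described concrete, here is a minimal Python sketch of such an attested execution functionality. Everything in it is illustrative: the names (Gatt, install, resume) follow the description above, and HMAC under the shared secret key is only a stand-in for a real, publicly verifiable signature scheme.

```python
import hmac, hashlib, os

class Gatt:
    """Sketch of an attested-execution functionality for one manufacturer.

    All platforms in `registry` share the same attestation key, which is
    the root of the deniability issues discussed later. HMAC stands in for
    a real digital signature (a real attestation is publicly verifiable).
    """
    def __init__(self, registry):
        self.registry = set(registry)  # parties that own a secure processor
        self.msk = os.urandom(32)      # manufacturer's secret attestation key
        self.enclaves = {}             # eid -> (program, state)
        self.next_eid = 0

    def getpk(self):
        # Stand-in for handing out the attestation *public* key.
        return hashlib.sha256(b"pk" + self.msk).hexdigest()

    def install(self, party, program):
        assert party in self.registry, "party has no secure processor"
        eid, self.next_eid = self.next_eid, self.next_eid + 1
        self.enclaves[eid] = (program, None)  # fresh enclave, empty state
        return eid

    def resume(self, party, eid, inp):
        assert party in self.registry
        program, state = self.enclaves[eid]
        output, new_state = program(state, inp)  # run the stateful program
        self.enclaves[eid] = (program, new_state)
        # The attestation binds the enclave, its program, and the output.
        msg = repr((eid, program.__name__, output)).encode()
        sigma = hmac.new(self.msk, msg, hashlib.sha256).hexdigest()
        return output, sigma

# Usage: a stateful counter program, attested on every call.
def counter(state, inp):
    state = (state or 0) + inp
    return state, state

gatt = Gatt(registry={"server"})
eid = gatt.install("server", counter)
print(gatt.resume("server", eid, 5))  # (5, '<attestation>')
print(gatt.resume("server", eid, 2))  # (7, '<attestation>')
```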
The part I want to focus on a bit more, and what to us was a bit more surprising, is that if we start from this assumption and try to get full UC-secure multi-party computation, things turn out to be somewhat more complicated, and I'll go into this now in a bit more detail. For simplicity, consider two-party computation, where Alice and Bob want to jointly compute some function of their inputs. What we can show is that when both parties have a secure processor, it's actually somewhat easy to get universally composable secure two-party computation. However, if one party doesn't have a secure processor, so in this case Bob lost his, we show that UC-secure two-party computation is impossible, and this is somewhat counter-intuitive. If you recall the informal picture from the beginning of this talk, where a client outsources computation to a server, we only assumed that the server has trusted hardware; the client doesn't necessarily have to. But it turns out that it's very hard to prove an ideal notion of security in this model. Maybe the most intuitive way to see the issue is again through non-deniability: if Alice, the only party with a secure processor, at any point uses this processor to compute an attestation under the globally shared key, then a malicious counterparty could use this attestation to convince anyone else that some honest party belonging to this hardware platform's registry actually participated in the protocol. This is something that, first of all, the ideal notion of two-party computation doesn't allow for. And it's intuitive to see why this wouldn't be an issue if both parties had a secure processor: Bob could have come up with such an attestation himself, which gives Alice a notion of plausible deniability.
One of the more technically interesting results looks at what happens if we really want to do things with a single secure processor; it seems more interesting in practice if not every single party has to own a processor from the same manufacturer. Because of the impossibility results, we have to rely on extra setup assumptions, and here we go for an assumption that has already been used in composable MPC: the augmented common reference string. I won't give a precise definition, but you can think of it as a setup that, for honest parties, is essentially a standard CRS, but that also allows malicious parties to query for a so-called identity key, essentially just a publicly verifiable signature on the party's identity. It's important to note that honest parties never have to interact with this CRS during the protocol. Although we already know from prior work that the augmented CRS alone gives secure MPC, the protocols you get if you include trusted hardware are interesting because their communication complexity between parties doesn't depend at all on the complexity of the program you're trying to run, because the program just runs inside the secure hardware. So this is something that might be interesting to achieve.

From a technical point of view, and this is something I'll go into in a bit more detail, what's quite interesting is that for the simulation proof to go through, for us to actually prove security, we need to embed backdoors into the program that runs in the enclave. This is somewhat surprising: this notion of backdooring programs has come up in work on, say, indistinguishability obfuscation, but it wasn't clear to us that it would pop up in this setting as well. In fact most, maybe all, of the protocols in our paper need some notion of backdoor in the programs.

Let me show a concrete example. This is the MPC protocol where we assume there's one distinguished party, which we call the server, that hosts the single secure processor; there's a bunch of remote parties that want to do MPC, and we have the augmented common reference string as well. The way the protocol works is that the program running in the enclave starts by generating public key pairs for each party and sends these out together with an attestation. In the interest of time I'll gloss over some details; in particular, in the actual protocol the server has to replace this attestation with an indistinguishable proof to get rid of the non-deniability issue. I'll let you look at the paper if you're interested in the details. After this, the construction is pretty standard: the parties do key exchange with the enclave and send their inputs encrypted under shared symmetric keys; the enclave collects all the inputs, computes the function, and sends out encrypted outputs (a small code sketch of this enclave program, including its backdoors, follows below). Let's now look at the interesting part: how we would prove security of this, how we would do the simulation. Here we consider that the server is malicious, so in particular the simulator is also able to query the trusted hardware.
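To make this more tangible, here is a rough Python sketch of what such an enclave program could look like, including the two simulation backdoors I'll describe next. This is a sketch under assumptions: key exchange with the enclave is abstracted into pre-established per-party keys, a hash-based one-time pad stands in for authenticated encryption, and verify_identity_key abstracts checking an identity key issued by the augmented CRS; the actual program in the paper differs in its details.

```python
import os, hashlib

def pad_xor(key, msg):
    # Toy cipher (messages up to 32 bytes); stand-in for authenticated encryption.
    pad = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(pad, msg))

def make_mpc_program(f, party_ids, verify_identity_key):
    """Enclave program for MPC with a single secure processor (sketch).

    `f` maps {party: input_bytes} to {party: output_bytes}. The two
    backdoors at the bottom are dead code in an honest run: honest parties
    never query the augmented CRS, so nobody can present a valid identity
    key on their behalf.
    """
    keys = {p: os.urandom(32) for p in party_ids}  # via key exchange (abstracted)
    inputs, forced = {}, {}

    def program(state, call):
        op, args = call
        if op == "input":                  # a party sends its encrypted input
            party, ct = args
            inputs[party] = pad_xor(keys[party], ct)
        elif op == "compute" and len(inputs) == len(party_ids):
            outs = f(inputs)               # run the actual computation
            return {p: pad_xor(keys[p], forced.get(p, outs[p]))
                    for p in party_ids}, state
        # --- backdoors, used only by the simulator ---
        elif op == "extract":              # leak a corrupt party's key
            party, id_key = args
            if verify_identity_key(party, id_key):
                return keys[party], state
        elif op == "program":              # equivocate a corrupt party's output
            party, id_key, out = args
            if verify_identity_key(party, id_key):
                forced[party] = out
        return None, state

    return program
```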
Here we have to embed trapdoors that allow the simulator to extract the inputs of malicious remote parties and equivocate their outputs. The way this works is that if the simulator wants to learn the input that was given by some malicious party, it calls a function in the enclave that, given the identity key obtained from the augmented common reference string for that malicious party, returns the corresponding secret key. Because honest parties never interact with the common reference string, this allows the simulator to extract inputs from malicious parties without affecting security for honest ones. In a similar way, once the simulator actually learns the output, it has to program the enclave to produce this output for malicious parties, and it can again do this by calling another backdoor function inside the enclave program, one that is never used in an honest run of the protocol.

Okay, let me now move on to another of our positive results, which has to do with fairness in two-party protocols. By fairness we mean the standard notion: we want to make sure that if one party learns the result of the computation, then the other party can also obtain the result after maybe some bounded amount of time, even if the first party aborted. We know that for general functionalities this is impossible in the plain model; this is a celebrated result by Cleve. A natural question is: could trusted hardware help achieve these notions of fairness as well? For this we consider an enhanced model that we call a clock-aware secure processor. It's essentially a piece of trusted hardware with access to a source of relative trusted time. Here we can again show that if both parties have such secure processors, we can actually get fairness for general two-party computation. Again, if one of the two parties doesn't have a secure processor, things break down. But we can show that for specific functionalities, such as coin tossing, we get fairness even in the setting where a single party has a secure processor and we also have this augmented common reference string.

The protocol for fair 2PC is actually relatively simple, so let me go over it quickly. It's a pretty standard construction: first, the two parties have their secure processors establish a secure channel over which they exchange their inputs and compute the output, so they perform the actual two-party computation. At this point, the two enclaves decide to withhold the outputs for a predefined, exponentially large amount of time. Then they start a tit-for-tat exchange where, in iterative fashion, they agree to halve the amount of time they have to wait before releasing the outputs to their hosts (sketched below). This wouldn't be possible if the processors weren't clock-aware. And it's easy to see that if one party gets its output at some time t, then the other party needs to wait at most twice that amount of time to also get its output, so we get a nice notion of fairness.
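To illustrate the tit-for-tat schedule, here is a minimal sketch of the release logic inside each clock-aware enclave. The initial delay, the names, and the tick granularity are illustrative assumptions.

```python
INITIAL_DELAY = 2 ** 40  # predefined, exponentially large number of clock ticks

class FairEnclave:
    """Release logic of one clock-aware enclave, after the 2PC itself is done."""
    def __init__(self, output, now):
        self.output = output
        self.release_at = now + INITIAL_DELAY  # withhold the output until then

    def ack_round(self, now):
        # Each completed back-and-forth lets both enclaves halve their wait.
        self.release_at = now + (self.release_at - now) // 2

    def read_output(self, now):
        # The host gets the output only once the trusted clock reaches the
        # release time; if the peer aborts, the enclave simply waits it out.
        return self.output if now >= self.release_at else None
```

In an honest run the two enclaves keep halving until the wait is negligible; if one party aborts after learning its output at time t, the other's enclave releases on its own after at most roughly twice that time, which is exactly the fairness bound above.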
Compared to prior approaches, what's nice in this setting is that the enclaves don't need to do any wasteful computation, as you would with, say, time-lock puzzles: in an honest run they just do a number of back-and-forth communications, and if one of the parties aborts, the other party's enclave simply sits idle for a certain amount of time.

Let me conclude by looking at the future directions suggested by our work. In this work we've looked at formal abstractions of trusted hardware. We've shown that attested execution is a very powerful primitive that allows us to do a lot of fun and interesting things, but also that subtle issues can arise because the attestation keys are shared across all protocols. As I alluded to at the beginning of this talk, the next logical step in this direction is to come up with a secure processor design that can be formally verified to implement a precise formal abstraction, which would then allow us to get provably secure implementations of systems built on top of trusted hardware. Thank you.

Thank you. We still have time for questions.

You said your model allows for getting stateful obfuscation, right? My understanding is that in the case of Intel SGX you don't actually get stateful obfuscation, because it leaks some access patterns. So I'm just wondering where the gap is.

Yes. This is what I mentioned earlier in the talk: today we can't show, for any particular processor on the market, that it actually realizes the ideal abstraction we want. In the case of SGX this is for two reasons: one, there are these side channels, which show that things aren't exactly as secure as we'd want; and two, even if we could get rid of these side channels, there's currently no way of formally verifying what the processor actually does. So that's where the gap currently is, between the actual implementation and the formal abstraction.

I understand that in your formalism, to run an enclave you need a special instruction. Does your notion of program allow making these special instructions? I mean, can you run an enclave inside an enclave?

What do you mean by that?

I mean a program inside an enclave that asks to run a program inside another enclave.

I don't think we need this in any of our constructions. It's possible that our formalism would allow you to do it, but none of our protocols require this circular notion of programs calling other programs. For instance, the trapdoors that we have in some of our constructions are really part of the original program that is loaded inside the enclave.

Your impossibility result for secure computation with a single piece of trusted hardware relies on the fact that you don't get deniability. You could imagine defining a notion of non-deniable secure computation, so I wonder whether you looked at that, and whether it could be realized.

We didn't look at this specifically. This is an approach that's been taken in quite a number of prior works, weakening the ideal functionality that you want to realize, and I don't see a reason why this wouldn't work in our case, but we didn't look at it specifically.

Thanks. Any more questions? Yes, Evgeni has a question.

So I just want to make sure I understand.
In this setting, the trusted hardware is completely trusted. It's not like a small thing; it can run everything, it has memory, and there are no side channels. I just want to make sure, because usually the kind of problems people have is that several parties run on this thing and share resources like caches and so on, and there are all those kinds of side channels. So here you just assume there is a processor, and if you prove UC security, then even if it's run several times, essentially nothing is leaked; it's completely trusted and it will sign everything. I just want to make sure I understand the model. Is that correct?

Right. You would essentially leak the sizes of the inputs and outputs and the size of the function, but it is a strong model in that, yes, we assume no other side channels. We actually also consider a much, much weaker model, which we proposed in prior work, where we assume that everything leaks: nothing executed inside the enclave remains private except for the attestation keys. This is an interesting setting because in one sense you could argue it's much more relevant to what we actually have in practice, and you can still do some pretty interesting things, like zero-knowledge proofs and UC-secure commitments, in this much weaker model.

Where did that appear?

The prior work appeared at EuroS&P earlier this week, and in this paper we also formalize that model more precisely.