My name is Lorenzo Martinico, and I'm here to present Steel: Composable, Hardware-Based, Stateful and Randomised Functional Encryption, at PKC 2021. This is joint work with my collaborators Pramod Bhatotia, Markulf Kohlweiss, and Yiannis Tselekounis.

Over the past 20 years, the cloud computing paradigm has taken hold across the industry. There are many advantages to using the cloud, such as increased efficiency and usability, especially when it comes to deploying large-scale machine learning models. However, as a consequence of using the cloud, when a client uploads their data, they lose control over it. As a motivating example, take the problem of detecting fake news in encrypted messages. When a user receives a suspicious message, they either have to evaluate it by downloading a heavy machine learning model to the client, which is inefficient and gives the user no way to provide feedback to the wider community, or they can decrypt their message and upload it to the cloud for analysis, which has the consequence of revealing the user's entire correspondence to the cloud.

A potential solution is to deploy public key functional encryption. This primitive provides a general notion of private data analysis, where the user's input remains secret while the output of a function over it is revealed publicly. Thus, a decryptor can be authorised to compute some function in a way that keeps the user's inputs private. Let's look at how the primitive works in a bit more detail. We have three parties: Alice, who wants to encrypt some data; Bob, a potentially malicious decryptor; and Charlie, a trusted authority. Charlie initialises the primitive by generating some public parameters and sends them to Alice. Bob can request authorisation for decrypting a function f from Charlie, who provides a functional key. Now Alice can encrypt a message x using the public key and send a ciphertext to Bob, who runs the decryption algorithm to produce f(x) for the function he has been authorised to decrypt, without ever learning the value of x.

While the primitive fits well within our problem setting, it currently has some limitations, especially when it comes to practical implementation. There aren't many functions that we are able to compute efficiently today; these are mostly limited to inner products. And if we'd like to embed this primitive within a larger protocol, we can't rely on composability in the standard model, but have to use other assumptions, such as the random oracle model. A potential solution to these issues is to use trusted execution environments. These are modern extensions to CPU architectures that increase the security properties of the host. They are a particularly good fit here, as in 2016, Fisch et al. realised a functional encryption protocol, Iron, using trusted hardware.

Our contributions in this paper are: generalising functional encryption to provide additional functionality, including stateful and randomised functions; extending the Iron protocol to compute FESR functions and formalising its security under the UC model of Pass et al. 2017, which we call PST; and relaxing the PST model to capture additional adversaries. The result is our protocol, Steel, which is composable, as it uses the UC model of PST; hardware-based, as its security relies on trusted execution environments; and computes stateful and randomised functional encryption.
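To make the interaction concrete, here is a rough Python mock of this workflow. It is a toy sketch of my own, not the paper's construction: the XOR-based "encryption" stands in for a real public-key scheme, the MAC stands in for a signature over the function, and the decryptor's direct access to the authority's secrets models trusted-hardware provisioning (discussed later) rather than any real cryptographic capability.

```python
import hmac
import hashlib
import secrets

class Authority:                                  # Charlie, the trusted authority
    def __init__(self):
        self.msk = secrets.token_bytes(32)        # master secret key
        self.sig_key = secrets.token_bytes(32)    # key for functional-key MACs

    def keygen(self, f_name: str) -> bytes:
        # Functional key: a MAC over the function's description
        # (a stand-in for the signature used in the real protocol).
        return hmac.new(self.sig_key, f_name.encode(), hashlib.sha256).digest()

def _pad(key: bytes, nonce: bytes, n: int) -> bytes:
    p = hashlib.sha256(key + nonce).digest()
    return (p * (n // len(p) + 1))[:n]

def encrypt(msk: bytes, x: bytes) -> bytes:       # Alice's encryption
    nonce = secrets.token_bytes(16)
    return nonce + bytes(a ^ b for a, b in zip(x, _pad(msk, nonce, len(x))))

def decrypt_enclave(auth: Authority, ct: bytes, f, f_name: str, fk: bytes):
    # Models Bob's decryption: check the functional key, recover x inside
    # the "enclave", and release only f(x), never x itself.
    expected = hmac.new(auth.sig_key, f_name.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(fk, expected):
        raise PermissionError("function not authorised")
    nonce, body = ct[:16], ct[16:]
    x = bytes(a ^ b for a, b in zip(body, _pad(auth.msk, nonce, len(body))))
    return f(x)

charlie = Authority()
fk = charlie.keygen("word_count")                 # Bob requests authorisation
ct = encrypt(charlie.msk, b"a private message")   # Alice encrypts her input
print(decrypt_enclave(charlie, ct, lambda x: len(x.split()), "word_count", fk))  # 3
```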
Let's examine the FESR functionality in a bit more detail. We still have Alice, Bob and Charlie in the same roles as before, and an ideal functionality FESR, which holds some state; at first, the state is initialised to null. Charlie can authorise a function f for decryption, and Alice sends her first message x0. The functionality then samples randomness, computes f over Alice's input and the state, and produces both an output y0 and the new state s1. Bob is only sent the output y0. Next, Alice uploads a new ciphertext for a new message x1, and again we compute the function f over x1 and s1, which produces a new public output y1 and updates the state to s2.

The properties of this primitive are confidentiality and correctness. The former entails that Bob, our malicious party, can only learn the output of authorised functions; in particular, he learns neither the state before or after the function application, nor the input, nor the randomness. By correctness, we mean that the state at any point during the computation is determined by the sequence of previous decryptions for that particular function, for that particular instance of Bob. So the only way for some Bob to influence the value of the state is to provide different inputs x to the decryption function.

Let's look at the architecture of trusted execution environments in a bit more detail. A TEE allows a host machine to instantiate one or more enclaves, each of which runs a program P, where the contents of the enclaves are kept secure from other processes running on the machine, including an adversarial operating system. In particular, an enclave provides confidentiality, as the host is not able to observe the behaviour of the enclave in terms of its code or the data it accesses. The enclave also provides integrity by generating an attestation, which corresponds to a cryptographic signature over the program being executed within the enclave and the output it produces. The details of the many different TEE architectures can be nicely abstracted through the UC global functionality GAT introduced in the PST model. It has a simple interface: first, the manufacturer can initialise the functionality by producing some public parameters, and each party within the protocol, regardless of their access to trusted execution environment machines, can get the master public key. Then a machine that does have access to a TEE can install a program, which generates a unique enclave, or resume an enclave, addressed by its enclave ID, by giving it some input. This executes the program on the specific input, and returns both an output value and a signature sigma, which corresponds to the attestation.
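As a rough illustration of the GAT interface just described, the following Python sketch mimics its shape. The class and method names are my own, and the MAC-based attestation and hash-derived master public key are simplifications I've assumed for exposition; the PST formulation uses a real signature scheme.

```python
import hmac
import hashlib
import secrets

class GlobalAttestation:
    def __init__(self):                      # run once by the manufacturer
        self._ak = secrets.token_bytes(32)   # attestation (signing) key
        self._enclaves = {}                  # eid -> (program, state)

    def get_mpk(self) -> bytes:
        # Any party, with or without a TEE, may fetch the master public key.
        return hashlib.sha256(self._ak).digest()

    def install(self, prog) -> str:
        eid = secrets.token_hex(8)           # fresh enclave id
        self._enclaves[eid] = (prog, None)
        return eid

    def resume(self, eid: str, inp):
        prog, state = self._enclaves[eid]
        out, state = prog(inp, state)        # run the program on the input
        self._enclaves[eid] = (prog, state)
        # Attestation: binds the enclave, its program, and the output.
        sigma = hmac.new(self._ak, repr((eid, prog.__name__, out)).encode(),
                         hashlib.sha256).hexdigest()
        return out, sigma

# Usage: install a running-total program, then resume it twice.
def counter(inp, state):
    total = (state or 0) + inp
    return total, total

gatt = GlobalAttestation()
eid = gatt.install(counter)
print(gatt.resume(eid, 5))   # (5, sigma)
print(gatt.resume(eid, 2))   # (7, sigma')
```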
Let's examine the Iron and Steel protocols. The two protocols are very similar, although they compute different kinds of functionality. In both, Bob and Charlie are machines equipped with SGX, or another trusted execution environment. The encryption phase simply corresponds to public key encryption, for which the key material is kept securely within different enclaves and only exchanged after the enclaves have attested to each other that they are valid. Functional keys correspond to signatures over some representation of the function, which the enclaves check for validity.

Let's look at the Steel protocol. Each party in this protocol is equipped with a CRS. We begin by having Charlie generate some public key encryption parameters within his key management enclave and return the public key to user space. This is then sent to Alice for future encryptions. Bob attests to the key management enclave in Charlie that he is running a genuine decryption enclave, and in return receives the master public key and master secret key of the PKE scheme. Now Bob can request authorisation for computing some function f, for which he receives a functional key, which corresponds to an attestation signature over f. This is one of the instances where Steel diverges from Iron, as Iron utilises a distinct signature scheme. Now Alice generates a ciphertext for some message x, and a plaintext proof of knowledge showing that she does have access to both x and r at the time of encryption. She sends these values ct and pi to Bob for decryption. Bob initialises, if he hasn't already, a functional enclave for the function f that he wants to compute. This enclave is created with initial state null. The functional enclave sends an attestation to the decryption enclave to convince it that it is actually a genuine functional enclave for the function f, and the decryption enclave returns the master secret key to the functional enclave, but only if it received a previous attestation signature for f. Now the functional enclave verifies that the plaintext proof of knowledge for the ciphertext is valid, and decrypts the ciphertext using the master secret key it received. It can then evaluate the function over the original message and the state, which returns a public output y and the new state used to update the functional enclave.
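Here is a minimal sketch of the functional enclave's evaluation step just described: verify the plaintext proof of knowledge, decrypt with the master secret key, evaluate the stateful randomised function, and persist the new state. The names verify_proof and pke_decrypt are hypothetical placeholders for the protocol's NIZK and PKE schemes, and the closure stands in for enclave-protected memory.

```python
import secrets

def make_functional_enclave(f, msk, verify_proof, pke_decrypt):
    state = None                          # enclave is created with initial state null

    def resume(ct, pi):
        nonlocal state
        if not verify_proof(pi, ct):      # reject ciphertexts without a valid proof
            raise ValueError("invalid plaintext proof of knowledge")
        x = pke_decrypt(msk, ct)          # recover Alice's message
        r = secrets.token_bytes(16)       # fresh randomness for the evaluation
        y, state = f(x, state, r)         # stateful, randomised evaluation
        return y                          # only the public output leaves the enclave

    return resume

# Toy instantiation with trivial placeholders: f counts invocations and
# outputs the message length; "decryption" is the identity.
enclave = make_functional_enclave(
    f=lambda x, s, r: (len(x), (s or 0) + 1),
    msk=b"msk",
    verify_proof=lambda pi, ct: pi == "ok",
    pke_decrypt=lambda k, ct: ct,
)
print(enclave(b"hello", "ok"))   # -> 5
```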
Our proof is in the UC with global subroutines setting, an extension of modern UC which includes global subroutines as part of a plain UC protocol. As with all UC proofs, we are trying to construct a simulator in the ideal world, on the right-hand side, which can convince the environment that it is communicating with the protocol in the real world on the left-hand side, when it is actually just communicating with the ideal functionality. Because of the addition of the global attestation subroutine, we have to set an identity bound on which parties can communicate with the GAT functionality. In particular, in the challenge session we only allow an adversarially corrupted party to communicate with GAT; in our case, this is limited to Bob. This allows us to control all communications between GAT and the environment through the simulator. Since both Bob and Charlie in the real world require access to the global attestation functionality, we are then forced to run both Bob's and Charlie's enclaves under Bob. Luckily, because we are using anonymous attestation, we do not produce a trace showing that Bob is actually running the key management enclave, and because Bob is completely controlled by the simulator, he is not able to report that he is running Charlie's enclaves either.

To successfully simulate a stateful functionality in the ideal world, we are required to pass all possible inputs to the decryption function through FESR, so that its state continues to update. If the environment chooses to encrypt a ciphertext using the public key encryption scheme, this would be fine in the real world, as Bob's functional enclave is allowed to decrypt using its master secret key and can then evaluate the function. However, in the ideal world, the FESR functionality does not have any notion of the public key encryption scheme. Thus, we introduce a plaintext proof of knowledge to be sent along with the ciphertext in our protocol, so that the simulator is able to use its extraction trapdoor to recover the original message from the proof and pass it to the ideal functionality for evaluation.

Additionally, because in the real world the evaluation of a function is conducted by an attested enclave, the environment might expect to receive an attestation signature for the evaluation. This is easy to provide in the real-world protocol, but in the ideal world the function output is produced by the ideal functionality, which does not attest it. To resolve this issue, we leverage a solution presented in the original PST paper, whereby we insert an intentional backdoor in the evaluation subroutine. This backdoor allows the simulator to sign arbitrary messages, which it can then present back to the environment. The insertion of the backdoor in the real world does not impact security, because Bob is already in control of the function code, and thus the environment will not be able to distinguish whether the value of the function evaluation is authentic or not.

Our proof relies on several cryptographic assumptions: the attestation signature is existentially unforgeable, the secure channels between enclaves are CCA secure, our plaintext proof of knowledge is simulation-sound extractable, and the encryption scheme used for message encryption is CPA secure. For each of these assumptions, we have a corresponding hybrid and reduction in the proof.

We also model the addition of rollback adversaries to the global attestation functionality. These adversaries are able to conduct two kinds of attacks: rollback and forking attacks. We modify the global attestation functionality so that rather than storing a single state, it stores a tree of derived states. In the honest case, this looks very similar to a linked list, as in the figure, where between each evaluation the memory simply advances. However, whenever a corrupted party resumes the enclave, it can specify an arbitrary location in the tree to resume the call from. Take as an example an enclave that is storing some kind of ledger. For each transaction, the ledger state will advance. However, an adversary trying to produce a double spend can roll back the state after they have conducted their first transaction, returning to the original state, which takes their balance back to what it was before the transaction took place. This is a rollback attack. Alternatively, they are able to maintain two different histories of the ledger concurrently, which allows them to show a different balance to any party that might be trying to interact with the ledger. This is known as a forking attack.

Several mitigations have been proposed in the literature to address this kind of state continuity issue. Intel SGX provides a set of hardware monotonic counters that can be used to sync the state of each enclave. However, these counters are quite slow to use in practice and are vulnerable to wear. An alternative, more efficient formulation is that of asynchronous counters, which need to be synchronised less frequently than the monotonic counters; however, only protocols where the history between each counter synchronisation can be reconstructed are suitable for this defence. Other papers, such as ROTE and Lightweight Collective Memory, have devised network protocols where security relies on communicating with other remote parties and the guarantees depend on threshold security. However, these incur quite heavy costs.
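To make the ledger example above concrete, here is a small Python sketch of the tree-of-derived-states bookkeeping. The names (StateTree, resume) are hypothetical; the point is that honest use only ever extends the latest node, while a corrupted party may resume from any earlier node in the tree.

```python
class StateTree:
    def __init__(self):
        self.nodes = {0: None}        # node id -> state; node 0 is the null state
        self.next_id = 1

    def advance(self, parent: int, new_state) -> int:
        node = self.next_id           # record a new state derived from `parent`
        self.nodes[node] = new_state
        self.next_id += 1
        return node

def resume(tree: StateTree, prog, node: int, inp):
    out, new_state = prog(inp, tree.nodes[node])
    return out, tree.advance(node, new_state)

# A toy ledger: the state is a balance (initially 100), inputs are debits.
ledger = lambda amount, bal: ((bal or 100) - amount,) * 2

t = StateTree()
_, n1 = resume(t, ledger, 0, 30)    # honest chain: balance is now 70
_, n2 = resume(t, ledger, n1, 70)   # spend the remaining 70: balance 0
# Rollback attack: resume from n1 again and spend the same 70 twice.
out, n3 = resume(t, ledger, n1, 70)
print(out)                          # 0 -- the earlier spend is "forgotten"
# A forking attack would keep extending both n2 and n3 concurrently,
# showing a different balance to different counterparties.
```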
Of course, another way to avoid rollback and forking attacks is to build stateless enclaves, where no state is held between executions of the program. This is what the Iron protocol achieves: since no significant state is held between each of its deterministic computations, Iron is secure against this kind of attacker. When it comes to Steel, rollback protection can be achieved by simply adding it to the decryption enclave, which can then run a protocol with each of the functional enclaves to check that their counter values are fresh. This increases efficiency, as any of the above measures would be quite costly to implement for all enclaves in the protocol.

In conclusion, our paper tries to answer three main research questions: how to strengthen functional encryption to compute a larger class of functions in an efficient manner; how to best model cryptographic protocols that use trusted execution environments composably; and what the limitations of trusted execution environments are once we introduce rollback and forking attacks. Thank you for listening. You can contact me at this email address, and the full preprint is available at the link below.