Hello, my name is Rabib Islam and this is joint work with Professor Anne Broadbent from the University of Ottawa. Let's start by considering the typical scenario of an encryption scheme. Alice encrypts some message with a key and sends the resulting ciphertext to Bob. If Bob has the key, he can decrypt the ciphertext to retrieve the original message. Now let us suppose that Bob has received the ciphertext from Alice but does not yet have the key to decrypt it. In this case, Alice might ask Bob to delete the ciphertext. Maybe she sent it by accident. Of course, it would be useful for Alice to have some kind of proof that Bob deleted it. More precisely, Bob should have a way to relinquish his means of retrieving any information about the original message, even given that he may receive the key in the future. And he should have a way to prove to Alice that he has done this. We call this the certified deletion property. Clearly, if the ciphertext consists only of classical information, this is impossible: Bob is always able to make a copy of the ciphertext, and no matter what Bob does with the original to create some kind of proof, the copy can always be decrypted once the key is received. Therefore, if we are to provide Bob with a way to perform certified deletion, we need to consider an alternative to a classical ciphertext. At this point, we should consider using a quantum ciphertext. After all, thanks to the no-cloning theorem, we know that there is no map that will create an identical copy of an arbitrary quantum state. But even given this, one may wonder: how will Bob prove to Alice that he has deleted the quantum ciphertext? This question can be answered through the use of entropic uncertainty relations. These relations state that if a qubit is measured in, for example, the computational basis, then information is lost as to what the measurement outcome would have been, had the measurement been done in the Hadamard basis.
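This basis trade-off can be sketched classically, simulating only the outcome statistics of BB84-encoded qubits; the function and parameter names here are illustrative, not from our paper:

```python
import random

def measure(bit, encode_basis, measure_basis):
    """Simulate the outcome statistics of measuring a BB84-encoded qubit.

    Bases: 0 = computational, 1 = Hadamard.  A matching basis recovers
    the encoded bit; a mismatched basis yields a uniformly random bit,
    which is exactly the information loss the uncertainty relation captures.
    """
    return bit if encode_basis == measure_basis else random.randint(0, 1)

random.seed(0)
# Matching bases always recover the bit.
assert all(measure(b, th, th) == b for b in (0, 1) for th in (0, 1))
# Mismatched bases are a coin flip: agreement hovers around 50%.
agree = sum(measure(0, 0, 1) == 0 for _ in range(10_000)) / 10_000
assert 0.45 < agree < 0.55
```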
So we can use conjugate coding measurements in order to achieve the desired property. The idea of certified deletion that we're presenting here is not brand new to our paper, and it does have some precedent in the previous literature. Unruh, for example, showed that a quantum encoding can be used as a means to show that a ciphertext has been revoked in the context of timed-release encryption. The revocation process he discusses, however, is fully quantum, while the deletion process we describe is classical. There are also clear technical differences in the proofs: he uses techniques related to CSS codes and quantum random oracles, whereas we use privacy amplification and uncertainty relations, and our work is not in the quantum random oracle model. Moreover, our work is outside the context of timed-release encryption. Fu and Miller, more recently, published the first paper on certified deletion, showing that verification of deletion can be done with classical interaction only. In particular, they showed that via a two-party nonlocality game with classical interaction, Alice can become convinced that Bob has deleted a single-bit ciphertext, in the sense that we discussed previously: the deleted state is unreadable, even if Bob were to later learn the decryption key. This result was shown in the device-independent setting, so the security they showed would hold against arbitrarily malicious quantum devices. Independently from us, Coiteux-Roy and Wolf touched on the idea of provable deletion using quantum encodings. However, their work is not explicitly concerned with encryption schemes, whereas we are explicitly concerned with what it would mean to delete a quantum ciphertext. That being said, there are similarities between our scheme and theirs, in that both make use of conjugate coding, or BB84 states, to prove deletion.
Moreover, although quantum key distribution is not intrinsically related to quantum encryption with certified deletion, it is instructive to compare our work to the existing literature on BB84 QKD. In regard to the adversarial model, in certified deletion Alice is honest and Bob is cheating, while in QKD, Alice and Bob are honest and Eve plays the role of adversary. As for the interaction model, certified deletion is almost non-interactive, while QKD involves multiple rounds of interaction between Alice and Bob. Nonetheless, our work involves conjugate coding, just like BB84, and the privacy amplification, error correction, and entropic uncertainty relations we use are similar to those discussed in Tomamichel et al.'s treatments of BB84. Now, I will begin discussing the content of the scheme. I would like to start by giving a rundown of a more basic version of the scheme, as the full version is a little bit complicated. After this, I will discuss the general approach followed in the proof, and then error tolerance. But before we detail the scheme, let's discuss some parameters. We let n be the length of the message, and we let m be the total number of qubits sent from Alice to Bob. Next, key generation. Alice samples theta uniformly at random from bit strings of length m with Hamming weight k, where k is less than m. Theta is the basis string in which she will encode her qubits: if theta_i is equal to zero, the basis of qubit i will be computational, and if theta_i is equal to one, the basis of qubit i will be Hadamard. The content of the qubits will be a bit string of length m called r. Her next step is to determine the part of r that is encoded in the Hadamard qubits: she samples a length-k bit string uniformly at random. We call this r_diag, and we also refer to these bits as the check bits. Next, she samples a length-n bit string uniformly at random, which we call u. Finally, she samples a hash function from a universal_2 family of hash functions.
The domain of these hash functions is bit strings of length m minus k, and the codomain is bit strings of length n. All of these bit strings, along with the hash function, comprise the key. Next, we look at how Alice encrypts a given message with her key. First, she samples r_comp uniformly at random from bit strings of length m minus k. These are the bits that will be encoded in the computational basis in her qubits. Then, she computes x as the output of the hash function on r_comp. Now, she has all the ingredients for her ciphertext, which is part quantum and part classical. The quantum part is the full string r encoded in BB84 states according to the basis string theta. The classical part is the message XORed with x and u. Now, if Bob has the key and the ciphertext, he can decrypt. For the sake of simplicity, let's put aside concerns about noise for now; these are taken care of with error tolerance techniques, which we discuss later. Since he has theta, he knows which qubits are encoded in the computational basis, so he can just measure those qubits to recover r_comp. Then, Bob applies the hash function to r_comp, and this yields x. Finally, he XORs the classical part of the ciphertext with x and u. This recovers the original message. Now, what if Bob wants to delete? All he has to do is measure all of the qubits in the Hadamard basis. This will provide him with a bit string y, which he can send back to Alice as a proof of deletion. Upon receiving this proof of deletion, Alice can verify its authenticity. Using theta, she takes the substring of y, the proof, which corresponds to the Hadamard positions of her qubits. We call that y prime. And she compares that to r_diag. Ideally, if y is a genuine proof of deletion, the Hamming weight of y prime XOR r_diag should be zero. One can easily account for some error here, and we do so by accepting the verification if the aforementioned Hamming weight is less than k times delta, for some parameter delta.
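Putting these pieces together, here is a purely classical sketch of the basic scheme. The quantum encoding is stood in for by tagging each bit with its basis, and the universal_2 hash is replaced by a random GF(2) matrix purely for illustration; none of the helper names below come from our paper:

```python
import random

def keygen(m, k, n):
    """Sketch of key generation; parameter names follow the talk."""
    theta = [1] * k + [0] * (m - k)       # which positions are Hadamard
    random.shuffle(theta)
    r_diag = [random.randint(0, 1) for _ in range(k)]   # check bits
    u = [random.randint(0, 1) for _ in range(n)]        # one-time pad
    # Stand-in for a universal_2 hash: a random n x (m-k) matrix over GF(2).
    H = [[random.randint(0, 1) for _ in range(m - k)] for _ in range(n)]
    return theta, r_diag, u, H

def hash_gf2(H, x):
    return [sum(h & xi for h, xi in zip(row, x)) % 2 for row in H]

def encrypt(msg, key):
    theta, r_diag, u, H = key
    m, k = len(theta), sum(theta)
    r_comp = [random.randint(0, 1) for _ in range(m - k)]
    x = hash_gf2(H, r_comp)
    # The quantum part would be r encoded in BB84 states according to
    # theta; here each bit is merely tagged with its basis, so this
    # sketch cannot capture the no-cloning aspect, only the bookkeeping.
    it_c, it_d = iter(r_comp), iter(r_diag)
    qubits = [(next(it_d) if t else next(it_c), t) for t in theta]
    classical = [mi ^ xi ^ ui for mi, xi, ui in zip(msg, x, u)]
    return qubits, classical

def decrypt(ct, key):
    theta, _, u, H = key
    qubits, classical = ct
    r_comp = [bit for bit, t in qubits if t == 0]   # measure comp qubits
    x = hash_gf2(H, r_comp)
    return [ci ^ xi ^ ui for ci, xi, ui in zip(classical, x, u)]

def delete_and_verify(ct, key, delta=0.1):
    theta, r_diag, _, _ = key
    qubits, _ = ct
    # Honest deletion: measure every qubit in the Hadamard basis; a
    # computational-basis qubit then yields a random outcome.
    y = [bit if t == 1 else random.randint(0, 1) for bit, t in qubits]
    y_prime = [yi for yi, t in zip(y, theta) if t == 1]
    errors = sum(a ^ b for a, b in zip(y_prime, r_diag))
    return errors <= delta * len(r_diag)   # accept if Hamming weight is low

random.seed(1)
key = keygen(m=40, k=16, n=8)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
ct = encrypt(msg, key)
assert decrypt(ct, key) == msg
assert delete_and_verify(ct, key)
```

Of course, in this classical stand-in Bob could simply copy the ciphertext before "deleting"; it is only the genuine quantum encoding that makes the deletion proof binding.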
It's worthwhile to note here that in the formal setting, parameters like m and k actually vary according to a security parameter lambda. Specifying this allows us to more precisely determine bounds relevant to the security of the scheme. So how does one add error tolerance to this scheme? Well, it's a little involved, but here's the gist. Recall that linear error-correcting codes can generate an error syndrome, and that corrections to a message can be made when given the syndrome of the correct message. So as part of key generation, Alice samples another hash function from a different family, where the domain is strings of length m minus k. She also samples uniformly at random two strings, one the length of a syndrome and another the length of the hash function's output. As part of encryption, she uses one of these strings to encrypt the syndrome of r_comp by XORing, and with the other, she encrypts the hash of r_comp by XORing. These two new values become part of the ciphertext. Bob can decrypt in the obvious manner and use the syndrome to correct r_comp. The process so far ensures correctness with high probability for certain noise thresholds. However, in order to ensure robustness, that is, in order to ensure that errors in decryption are detected with high probability, Bob compares the hash of his version of r_comp with the hash he receives from Alice. If the hashes are not equal, the decryption is understood to be faulty. Then there's the question of how secure the scheme is as an encryption scheme. Well, thanks to the length of the key, the scheme achieves perfect ciphertext indistinguishability. This is somewhat obvious due to the uniformly random string that is XORed with the message as part of encryption. As part of our paper, we've developed a formal definition of certified deletion security that can be used to evaluate any proposed certified deletion encryption scheme.
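The syndrome-based correction step can be sketched with the [7,4] Hamming code standing in for whatever linear code the scheme actually instantiates: given the (encrypted) syndrome of Alice's correct string, Bob can repair a single-bit error in his noisy copy.

```python
# Parity-check matrix of the [7,4] Hamming code: column j (1-indexed)
# is the binary expansion of j, so a single-bit error's syndrome reads
# off the error position directly.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]

def syndrome(word):
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(received, target_syndrome):
    """Flip at most one bit so that received matches target_syndrome."""
    diff = [a ^ b for a, b in zip(syndrome(received), target_syndrome)]
    pos = sum(d << i for i, d in enumerate(diff))   # 0 means no error
    if pos:
        received = received.copy()
        received[pos - 1] ^= 1
    return received

word = [1, 0, 1, 1, 0, 1, 0]
s = syndrome(word)          # what Alice would send, encrypted
noisy = word.copy()
noisy[4] ^= 1               # a single bit flip in transit
assert correct(noisy, s) == word
```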
That is, any quantum encryption scheme for classical messages which includes deletion and verification circuits. However, instead of going over the formal definition in isolation, I will go over a summary of the proof, and the spirit of the definition of certified deletion security will become apparent. Let's look at a game involving the scheme. This game aims to exhibit what might be considered an attack on the certified deletion property of our scheme. Here, Alice will be the party who encrypts and sends a message, while Bob will take an adversarial role. The game begins with Bob selecting any candidate message of his choice and passing it to Alice. Alice generates the key with the key generation algorithm. She then samples a bit b uniformly at random. If b equals zero, she encrypts a dummy message, let's say the string of zeros of the same length as the candidate message she got from Bob. If b equals one, she encrypts the candidate message. She passes this ciphertext to Bob. Now, Bob generates a bit string that is m bits long. This serves the purpose of a proof of deletion, though he does not have to actually delete the ciphertext. Bob sends this string to Alice, who then runs the verification algorithm on the string. This outputs a bit, ok. Alice's last action is to send the key she generated to Bob. At this point, Bob takes a guess as to the value of the bit b that Alice chose. It is as though he is determining whether the message that was encrypted by Alice was the message he sent to her or a different one. Now, we can think of the adversary as wanting to do two things simultaneously: one, determine whether his message was encrypted in the ciphertext he received; and two, convince Alice that he has actually deleted the ciphertext prior to receiving the key.
Therefore, we may say that the security of the scheme holds if the probabilities of the two following events are negligibly close. The first event is that verification passes and Bob's guess of b is one, in the case that Alice encrypted the string of zeros, that is, b equals zero. The other event is that verification passes and Bob's guess of b is one, in the case that Alice encrypted the candidate message, that is, b equals one. If this condition holds, then it is very difficult for Bob to achieve both of his goals: getting verification to pass and correctly guessing b with any reliability beyond chance. It may appear somewhat intuitive that this condition holds in our scheme. On one hand, Bob is incentivized to measure as many qubits as possible in the Hadamard basis in order to send Alice a convincing proof of deletion. However, measuring a qubit encoded in the computational basis using the Hadamard basis will yield the incorrect value with probability one half. The more qubits measured in the Hadamard basis, the more likely it is that information about r_comp is being lost to Bob. This, combined with the fact that the hash function must be applied to retrieve an important value in decryption, makes it extremely unlikely that Bob can learn anything about the encrypted message, even after receiving the key. On the other hand, if he were to measure as many qubits as possible in the computational basis in order to get r_comp, the check bits would likely be measured incorrectly, resulting in an invalid proof of deletion. Although he might be able to determine the original message given the key, he will have failed in giving a proof of deletion which passes the verification algorithm. While the intuition may be fairly straightforward, the actual proof of certified deletion security is not so simple, as the game described above is difficult to analyze.
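This trade-off can be seen numerically in a small classical Monte Carlo of the two extreme strategies. As before, only the measurement statistics are simulated, and the parameter values are arbitrary choices of mine:

```python
import random

def pass_rate(bob_basis, m=60, k=20, delta=0.1, trials=200):
    """Fraction of runs in which Bob's proof verifies when he measures
    every qubit in bob_basis (0 = computational, 1 = Hadamard).  A
    wrong-basis measurement yields a uniformly random outcome."""
    passes = 0
    for _ in range(trials):
        theta = [1] * k + [0] * (m - k)
        random.shuffle(theta)
        r = [random.randint(0, 1) for _ in range(m)]
        y = [ri if t == bob_basis else random.randint(0, 1)
             for ri, t in zip(r, theta)]
        errs = sum(yi ^ ri for yi, ri, t in zip(y, r, theta) if t == 1)
        passes += errs <= delta * k
    return passes / trials

random.seed(3)
# All-Hadamard: verification always passes, but every computational-basis
# outcome (hence r_comp) has been randomized away.
assert pass_rate(bob_basis=1) == 1.0
# All-computational: Bob keeps r_comp, but the check bits come out
# random and verification almost never passes.
assert pass_rate(bob_basis=0) < 0.1
```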
As a result, we constructed a different game, consisting of an entanglement-based sequence of interactions, that is easier to analyze. We then used a reduction to show that statements about game two translate into statements about game one, ultimately implying the existence of a bound relevant to our scheme's security. Game two goes as follows. First, Bob selects a candidate message and sends it to Alice. Bob also prepares a tripartite quantum state with any entanglement he wishes, each part consisting of m qubits. Let's name these parts A, B, and B prime. He sends A to Alice and keeps B and B prime for himself. He immediately measures the B system in the Hadamard basis and obtains a string y of length m, which he sends to Alice. He hangs on to B prime. Then, much like in key generation in our scheme, Alice samples a theta of length m with Hamming weight k, a string u of length n, and a hash function. She measures A using theta as a measurement basis. Let's call the output r. We can then break this into r_comp and r_diag, like we have in our scheme. She also computes the hash of r_comp; this is x. Then she chooses a bit b at random and encrypts accordingly: either an n-length string of zeros or the candidate message that Bob sent her. Now, Alice checks the Hamming weight of r_diag XOR y prime, where y prime is the substring of y at the Hadamard positions. If this is sufficiently low, we get a flag that verification has passed. Then she sends the encrypted message along with the key. Finally, based on all the information Bob has access to, including the B prime state he saved earlier, he makes his guess as to what the bit b is. Now, we can see how this is very similar to game one in essence. The entanglement that Bob creates in the beginning is analogous to the ways he might measure his qubits in game one. For example, if he were to measure everything in the Hadamard basis in game one, that would be the same as fully entangling systems A and B in game two. In both games, he gets r_diag.
If he were to measure everything in the computational basis in game one, that would be the same as fully entangling systems A and B prime in game two, and then measuring B prime in the computational basis at the end in order to make his guess. In both games, he gets r_comp. But how does this new game help us prove the security of our scheme? Well, the entanglement-based setting allows us to apply an entropic uncertainty relation. In particular, we use one formulated by Tomamichel, which has its precursor in work done by Tomamichel and Renner. It is a statement about smooth min- and max-entropies, and it describes clearly the information trade-off that Bob is making in game two. The full technical meaning of it is a little involved, so I will not go into it here, but we use it to show that if the verification test is passed, then the information that Bob has access to about r_comp is low, with high probability. On top of this, the hash function serves in the capacity of privacy amplification. In order to formalize this, we use the leftover hashing lemma developed by Renner. In practical terms, this lemma allows us to turn a lower bound on Bob's uncertainty about r_comp into a statement about how close the output of the hash function is to a uniformly random string. Ultimately, this blocks Bob from obtaining any information about the message that Alice encrypted. While the current work is foundational in nature, we have thought of a few ways in which certified deletion may be useful. One concern that has plagued cryptographers is leakage of a key after a certain amount of time. This is a particular concern in public-key cryptography, where the secrecy of a private key is typically guarded by computational assumptions. Of course, under the definition we discuss here, if a scheme has certified deletion, then we can have some extra reassurance about the secrecy of the message data if deletion is performed properly.
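For concreteness, one standard universal_2 family suitable for privacy amplification is Toeplitz-matrix hashing over GF(2); this is a sketch of the general technique, not necessarily the family our scheme instantiates:

```python
import random

def toeplitz_hash(seed, x, n):
    """Hash len(x) bits down to n bits with a random Toeplitz matrix
    over GF(2).  The matrix T has constant diagonals, so it is fully
    described by the (n + len(x) - 1)-bit seed via
    T[i][j] = seed[i - j + len(x) - 1]; Toeplitz matrices form a
    2-universal family, as the leftover hashing lemma requires."""
    l = len(x)
    return [sum(seed[i - j + l - 1] & xj for j, xj in enumerate(x)) % 2
            for i in range(n)]

random.seed(4)
l, n = 24, 8
seed = [random.randint(0, 1) for _ in range(n + l - 1)]
x = [random.randint(0, 1) for _ in range(l)]
digest = toeplitz_hash(seed, x, n)
assert len(digest) == n
# Linearity over GF(2): hashing x XOR x2 equals the XOR of the hashes.
x2 = [random.randint(0, 1) for _ in range(l)]
xor = [a ^ b for a, b in zip(x, x2)]
assert toeplitz_hash(seed, xor, n) == [
    a ^ b for a, b in zip(digest, toeplitz_hash(seed, x2, n))]
```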
While this definition exists in the private-key realm, one may be able to adapt it to the public-key setting. Also, in recent years, people have been increasingly concerned about data retention, so much so that the EU has passed regulations including a clause on a right to be forgotten, meaning that a person should be able to have their data erased when its retention is no longer necessary. Certified deletion encryption might be able to provide a level of accountability here. Moreover, depending on the composability of our scheme, certified deletion may be able to play a role in everlasting security. That is, it may be able to transform a long-term computational assumption into a temporary one. You can find further discussion on this topic in our paper. So, in summary, what we ended up doing is using somewhat BB84-QKD-style logic to develop a new scheme for a relatively new security definition. We have hopes that the framework of certified deletion encryption can facilitate some new applications. We have identified some potential next steps. For one, it may be worthwhile to look into the composability of our scheme, or of some other certified deletion encryption scheme. It would also be useful to look into homomorphic encryption with certified deletion, so that ciphertexts would be useful to some extent while the option of deletion is maintained. With that, I conclude my presentation. Thanks for listening.