All right, thank you. So I'll be talking about our work on asymmetric message franking for end-to-end encryption. Nowadays, more than a billion users communicate on platforms like WhatsApp, Signal, and Facebook Secret Messenger. These services provide confidentiality and integrity, meaning the platform can't read or modify messages. Additionally, they aim to provide deniability: it shouldn't be possible for other parties to attribute messages to a sender, even in the event that the recipient is compromised or is maliciously leaking messages. Another security property we might desire of these messaging systems is metadata privacy, in which the metadata of who is communicating with whom is hidden from the platform. Most work on this last property, metadata-private messaging, has been confined to the research community, but just in December of last year, Signal deployed a limited form of metadata privacy that they call sealed sender. In Signal's sealed sender, the sender's identity is hidden from the platform when a message is sent. So all these security properties are great, but what about abuse? Countless forms of abuse surface on these online platforms. To name just a few: Alice could be a cyberbully or an abusive ex-partner harassing Bob with abusive messages, or perhaps a spammer sending annoying or potentially even dangerous bait messages in the hopes of link clicks. Even more recently, we've seen in the news the prevalence of misinformation campaigns, which have contributed to political instability and in some cases even incited riots and lynchings. So in response, to maintain a healthy platform and hold users accountable, these services have chosen to introduce content-based moderation.
Here, Bob can report a message from Alice to a moderation system that in practice typically involves a combination of machine learning and human review to judge whether the platform's content policy has been violated. In that case, punitive action can be taken against Alice, such as banning her from the platform. Content moderation has in fact become a very big priority for some of these services. For example, Facebook employs over 15,000 content moderators, and while that stat is over all of their services, it still illustrates the point that content moderation has become an integral part of their platform. However, end-to-end encryption and privacy complicate the story of content moderation. The moderator doesn't see the message plaintext until Bob reports it, so they can't determine whether Alice actually sent the message or whether Bob is just saying Alice sent it. So in this work, we ask whether we can balance the need for accountability via content moderation with the privacy goals we desire of metadata-private, end-to-end encrypted messaging. We're able to answer this question in the affirmative. Doing so led us to introduce a new primitive that we call asymmetric message franking, for reasons that will become clear momentarily. This asymmetric message franking can be paired with existing metadata-private messaging services to enable content moderation. Even more than that, our primitive is useful beyond just this metadata-private setting, and also enables moderation when the moderator is decoupled from the messaging platform, for example in federated messaging services or in communities that span multiple messaging platforms. So we provide formal accountability and deniability security notions for the content moderation setting, and report on the design and evaluation of a construction inspired by designated verifier signatures.
To get started, let's back up and look at the existing tools supporting cryptographic verification for content moderation. This prior work, termed message franking, is a technique originally built for Facebook Secret Messenger, which supports content moderation in non-metadata-private settings. To provide a little more detail, recall that in modern end-to-end encryption, Alice and Bob generate a per-message shared secret key via some key agreement protocol, such as Signal's triple Diffie-Hellman handshake. That symmetric key is then used to encrypt the plaintext. In the non-metadata-private setting, Alice then sends the ciphertext to the platform over a channel that's authenticated and bound to her identity. This means the platform knows the ciphertext is being sent by Alice and that it should be delivered to Bob; for example, on Facebook, Alice is logged into her Facebook account. In message franking, then, the platform simply stores this identity information along with the ciphertext. To report a message, Bob provides the symmetric key material, and the platform can use that key to decrypt the ciphertext and learn what message Alice sent Bob. This all works fine as long as the underlying symmetric encryption algorithm is committing, meaning that Bob can't provide a key that opens the ciphertext to a second valid plaintext. This is still a bit of a simplification of the message franking solution, but it illustrates the basic idea. Unfortunately, the solution doesn't work when we consider the metadata-private setting. There, while the plaintext is still bound to the ciphertext, the identities no longer are, so the moderator doesn't learn who sent the message. At this point, it seems like there might be an easy fix: can we bind in Alice's identity using the same committing encryption strategy?
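To make the committing idea concrete, here's a minimal sketch of a committing encrypt-and-report flow. This is not Facebook's actual scheme: the XOR keystream is for illustration only and is not a secure AEAD, and all function names are mine. The key point is the HMAC over the plaintext, which acts as a binding commitment that Bob later opens for the moderator.

```python
import hashlib
import hmac
import os

def commit_encrypt(k_enc: bytes, k_frank: bytes, msg: bytes):
    """Encrypt msg and attach a committing franking tag (toy sketch)."""
    # Illustration-only stream encryption: XOR with a SHA-256 keystream.
    nonce = os.urandom(16)
    stream = hashlib.sha256(k_enc + nonce).digest()
    while len(stream) < len(msg):
        stream += hashlib.sha256(k_enc + nonce + stream[-32:]).digest()
    ct = bytes(m ^ s for m, s in zip(msg, stream))
    # The franking tag is an HMAC over the plaintext: a binding commitment,
    # since opening the same tag to a second (key, message) pair would
    # require finding an HMAC collision.
    tag = hmac.new(k_frank, msg, hashlib.sha256).digest()
    return nonce, ct, tag

def moderator_verify_report(k_frank: bytes, msg: bytes, tag: bytes) -> bool:
    # On report, Bob reveals k_frank and msg; the moderator checks that
    # they open the commitment it stored alongside the ciphertext.
    expected = hmac.new(k_frank, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

In the real deployment the platform also stores the sender and receiver identities next to the tag, which is exactly the part that disappears in the metadata-private setting.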
Bob can then verify that the bound identity is correct, and the moderator will use the bound identity to attribute the message. Unfortunately, there's a subtle attack here, in which a third user, Charlie, can collude with Bob to frame Alice as having sent a message she didn't actually send. And since the platform is metadata private, the moderator can't know the message originally came from Charlie and not from Alice. We consider this a break of accountability. At its core, the problem with these approaches that rely on purely symmetric techniques is that Alice's identity isn't cryptographically bound to the message content. We'll need a different strategy for the metadata-private setting, or more generally for settings where the moderator doesn't learn the sender's identity. So at a high level, our solution, asymmetric message franking, is a specialized digital signature scheme that reintroduces the sender identity binding using public key identities, while simultaneously achieving the accountability and deniability goals of our content moderation setting. To send a message, Alice first signs the message using her secret key and then encrypts the message and signature under the end-to-end encryption symmetric key. So our message franking protocol is decoupled from the end-to-end encryption algorithm. Bob can then choose to report the message by sending the message and signature to the moderator. You might notice right away that accountability is easily satisfied using just standard digital signatures: both the moderator and Bob can verify that a message is from Alice. However, this same verification algorithm can be run by anyone, and thus Alice's messages don't have any deniability. So we're going to need other approaches. As a starting point for our solution, both technically and conceptually, let's consider designated verifier signatures.
Designated verifier signatures are a cool technique in which the signer can designate a specific party, and only that specific party, to be able to verify the signature. Accountability is met since the designated verifier can't be fooled by forgeries. But there also exists a forging algorithm that, even without the secret signing key, can generate a fake signature that's indistinguishable from a real signature to any party that isn't the designated verifier. This provides deniability in the cryptographic sense, in that there's no cryptographic reason for anyone to believe that a signature is authentic, because it could just as easily have been a forgery. And this type of deniability seems particularly important in the wake of recent data breaches, in which breached signatures were used as cryptographic evidence of the breach's authenticity. So now let's consider using designated verifier signatures in the following way: have Alice sign her messages with the moderator as the designated verifier. We'll add a public key for the moderator, which Alice will use to sign her message, and the moderator can verify the message using their own secret key. Then, by the designated verifier property of the signature, others can't be convinced that a message is from Alice, since it could just as easily have been a forgery. However, unfortunately, by that very same property, Bob also isn't able to check whether the signature will be accepted by the moderator. And so that means Alice can evade moderation by sending Bob bad signatures. To complete the accountability picture, we'll still need something more. Specifically, we need some way for Bob to check whether the signature will be accepted by the moderator. So in asymmetric message franking, we do this by adding Bob as a designated verifier of a proof that the moderator will accept the signature. So what does that look like? Now all three parties get a public key.
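One classic way to get the designated verifier flavor, as a toy sketch rather than the scheme in the paper, is to MAC the message under a static Diffie-Hellman key shared between the signer and the designated verifier. The verifier can check the MAC, but because the verifier could have computed the exact same MAC himself, the signature convinces no third party. The group parameters below are deliberately toy-sized illustrations, not secure choices.

```python
import hashlib
import hmac
import secrets

# Toy group parameters for illustration only (not a secure choice).
P = 2**127 - 1          # a Mersenne prime
G = 3                   # toy generator

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def dh_mac_key(my_sk: int, their_pk: int) -> bytes:
    # Static Diffie-Hellman: both sides derive g^(sk_signer * sk_verifier).
    shared = pow(their_pk, my_sk, P)
    return hashlib.sha256(str(shared).encode()).digest()

def dv_sign(sk_signer: int, pk_verifier: int, msg: bytes) -> bytes:
    # MAC under the key shared with the designated verifier.
    return hmac.new(dh_mac_key(sk_signer, pk_verifier), msg,
                    hashlib.sha256).digest()

def dv_verify(sk_verifier: int, pk_signer: int, msg: bytes,
              sig: bytes) -> bool:
    # The verifier recomputes the same MAC -- which is also exactly how
    # he would forge it, giving the signer deniability.
    return hmac.compare_digest(dv_sign(sk_verifier, pk_signer, msg), sig)
```

Note the deniability is baked into symmetry: `dv_sign(sk_m, pk_a, msg)` produces a byte-identical "forgery" of Alice's signature, so leaking it proves nothing.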
Alice signs messages specifying the public keys of the two verifying parties, which are the receiver, Bob, and the moderator. The first verification algorithm is run by the recipient to verify that the signature will be accepted by the moderator, and the second verification algorithm, which we call judge, allows the moderator to attribute a message to the sender. Informally, then, our accountability notions restrict how these three algorithms interact with each other. Receiver binding means Bob is bound to only be able to report messages that he actually received from Alice; he can't frame Alice by generating a message that passes judge that she didn't actually send. Sender binding means Alice is bound to messages she actually sends to Bob, and messages she sent can be reported and traced back to her; she can't send Bob a message that Bob will accept through his verify algorithm but that will later fail the moderator's judge algorithm. So that covers our desired accountability goals, but what about deniability? Here, we reason about deniability by thinking about who can create forgeries that are indistinguishable from real signatures. For example, consider our earlier scenario where Bob is leaking Alice's messages to the public. If there existed a forging algorithm that Bob could run using his secret key that produces a signature indistinguishable from Alice's real signature, then the public would have no reason to believe the message came from Alice, because it could just as easily have been Bob's forgery. So we want the public, who only has access to public keys, to be unable to distinguish between these two worlds: one that uses Alice's real signature and one that uses Bob's forgery. Another scenario we might care about is the moderator leaking Alice's messages. Here we'd require a forging algorithm that takes the secret key of the moderator.
And to make things even more complicated, what about when the moderator or the recipient is compromised and their key becomes public? We may want to maintain deniability even in certain key-compromise scenarios. To capture key compromise, we would want the distinguisher to be unable to distinguish even when given secret key material. And so you can see that when considering all of these key-compromise scenarios, this quickly blows up into combinatorially many possible forging relationships. Some of these forging relationships have interesting implications. To give an example relating to the previous presentation, consider the relationship where a forger with only public keys fools a distinguisher that even holds Alice's secret key. This implies that when maintaining deniability, Alice won't be pressured to reveal her secret key in order to show her innocence, since her key isn't going to be of any help anyway. In other words, this forging relationship implies that Alice doesn't have the ability to repudiate messages. On the other hand, other relationships directly contradict our accountability goals. For example, if the receiver Bob were able to create a forgery that fooled the moderator, that would be a direct violation of our receiver binding property, allowing Bob to frame Alice for messages she didn't actually send. So ultimately, we explored this vast deniability space. In this table, I'm showing the keys available to the forger on the left side and the keys available to the distinguisher on the top. You can see that some of the forging relationships are explicitly ruled out by our chosen accountability goals, which I'm depicting with these little symbols. We ended up picking a set of three deniability relationships, corresponding to forging with no secret keys, forging when the receiver's key is compromised, and forging when the moderator's key is compromised.
As an added benefit, our chosen deniability targets imply the rest of the forging relationships that aren't ruled out directly by our accountability goals, as shown by the blue shading on this table. The choices we made represent only one possible design point in this deniability space; there are many other reasonable trade-offs depending on your desired accountability goals. Supporting repudiable signatures would be one such example, as motivated by the previous talk. Okay, so to summarize, so far we've seen the accountability and deniability goals for asymmetric message franking, but not how to actually build a scheme that achieves them. This was challenging because we have so many security properties to balance. So next, let me give you a high-level overview of our AMF construction. Our construction consists of a proof of knowledge of a carefully chosen expression of discrete log relationships, which relate the public keys of the parties involved. The signature proof of knowledge notation we'll use to specify proofs of discrete log relationships follows the Camenisch-Stadler style. So for example, this is the notation for a standard Schnorr digital signature, which proves knowledge of Alice's secret key. This relationship, as well as those used in our construction, can be proved using an interactive sigma protocol with a three-phase commit-challenge-response structure. We can then turn this interactive protocol into a signature using the standard Fiat-Shamir transform, generating the challenge with a hash function that also binds in the message. And so with that, here's our construction. At a high level, as per our earlier intuition, the construction is made up of two components. The first clause is essentially an embedded designated verifier signature to the moderator, and the second clause corresponds to a designated verifier proof to the receiver that the moderator will accept the signature.
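The Schnorr signature just mentioned, i.e. the signature proof of knowledge SPK{(x) : pk = g^x}(m), can be sketched as follows. This is a textbook Fiat-Shamir Schnorr over a toy group (the modulus and generator here are illustrative, not secure parameters, and the helper names are mine).

```python
import hashlib
import secrets

# Toy group parameters for illustration only (not a secure choice).
P = 2**127 - 1   # a Mersenne prime, so Z_P^* is a group of order P - 1
G = 3            # toy generator
Q = P - 1        # exponent arithmetic is done modulo the group order

def h(*parts) -> int:
    """Fiat-Shamir challenge hash over the transcript and the message."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def spk_sign(sk: int, pk: int, msg: bytes):
    # Commit phase: T = g^r for random r.
    r = secrets.randbelow(Q - 1) + 1
    t = pow(G, r, P)
    # Challenge phase, made non-interactive by hashing in the message.
    c = h(pk, t, msg)
    # Response phase.
    s = (r + c * sk) % Q
    return c, s

def spk_verify(pk: int, msg: bytes, sig) -> bool:
    c, s = sig
    # Recompute T = g^s * pk^(-c) and re-derive the challenge:
    # g^s * pk^(-c) = g^(r + c*sk) * g^(-c*sk) = g^r = T.
    t = (pow(G, s, P) * pow(pk, -c, P)) % P
    return c == h(pk, t, msg)
```

The AMF construction composes several such clauses (conjunctions and disjunctions of discrete log statements) under a single Fiat-Shamir challenge, rather than running them independently.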
So now let's look at what makes up the designated verifier signature to the moderator. It consists of a disjunction of two clauses. The first clause is what Alice proves to the moderator, and you can notice that it mirrors the Schnorr signature I put up earlier, in which Alice is simply proving knowledge of her secret key. The second clause is what allows other parties, without knowledge of Alice's secret key, to forge. Other parties will construct this J value in different manners. The moderator will only accept the signature if the J value is constructed in a very special way: specifically, as a Diffie-Hellman triple with the moderator's own public key and this ephemeral E_J value. And assuming the ephemeral information that constructs E_J is discarded, only the moderator is able to check the validity of this Diffie-Hellman triple, which it can do using its secret key. This is what provides the designated verifier property. Zooming back out to the full expression and looking at the designated verifier proof to Bob, we see that it's constructed in the same form: the first clause is what Alice is proving to Bob, and the second clause is what allows for forgeries. Remember, the goal of the designated verifier proof to Bob was for Alice to prove that the designated verifier signature to the moderator will be accepted, and we can see that that's exactly what she's doing here: she's proving to Bob the Diffie-Hellman relationship of J, E_J, and the public key of the moderator. Okay, so in summary, our construction provides a way for the moderator to attribute a signature to Alice, for Bob to verify that a signature will be properly attributed, and a variety of different forging algorithms for various key-compromise scenarios that give Alice a level of deniability. For more details on how these different forgeries are created, please take a look at the paper.
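The Diffie-Hellman-triple check that gives the moderator its designated verifier power can be sketched like this. The sender picks an ephemeral alpha, publishes J = pk_mod^alpha and E_J = g^alpha, and discards alpha; then (pk_mod, E_J, J) is a DH triple, and checking J == E_J^sk_mod requires the moderator's secret key. As before, the group parameters and function names are illustrative only.

```python
import secrets

# Toy group parameters for illustration only (not a secure choice).
P = 2**127 - 1   # a Mersenne prime
G = 3            # toy generator

def make_j(pk_mod: int):
    """Sender side: build (J, E_J) as a DH triple with the moderator's key."""
    alpha = secrets.randbelow(P - 2) + 1   # ephemeral; discard after use
    j = pow(pk_mod, alpha, P)              # J   = pk_mod^alpha = g^(sk*alpha)
    e_j = pow(G, alpha, P)                 # E_J = g^alpha
    return j, e_j

def moderator_check(sk_mod: int, j: int, e_j: int) -> bool:
    # (pk_mod, E_J, J) is a DH triple iff J == E_J^sk_mod, since
    # (g^alpha)^sk = (g^sk)^alpha. Only the secret key holder can check.
    return j == pow(e_j, sk_mod, P)
```

Without sk_mod, deciding whether (pk_mod, E_J, J) is a DH triple is exactly the decisional Diffie-Hellman problem, which is why parties other than the moderator learn nothing and can substitute arbitrary J values when forging.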
The paper also has the formal theorem statements and proofs showing our construction meets our desired security properties. And although this proof of knowledge seems large, it's actually made up of very standard discrete log proofs that can be implemented quite efficiently. We implemented our construction in Python using the petlib library and found that our solution leads to short signatures and fast signing and verification times, well within the bounds of what we think is practical. We also wanted to see how this would work on existing platforms. Recall that since asymmetric message franking doesn't require platform metadata, we can add it as a third-party service on top of legacy systems. So we provide a proof-of-concept integration with Twitter private messages that uses Keybase to manage the public key identity bindings for Twitter identities. In our integration, the service also uses a machine learning system called Perspective to automatically score message toxicity levels. We think this proof of concept provides a pretty good example of the types of things you can do with asymmetric message franking. So in conclusion, we introduced asymmetric message franking, a new primitive for cryptographic content moderation in the metadata-private setting. We introduced definitions and strategies to formally reason about the trade-offs between deniability and accountability in this space. And lastly, we gave a construction conceptually based on designated verifier signatures. Thank you, I'm happy to take any questions. All right, thank you. We'll have time for a few questions. Again, please come to the mics. Have you looked at standard-model constructions, maybe possibly inefficient ones, just from, I don't know, generic signatures and so on? We haven't gone in that direction, no. And just to check, you have a single scheme that simultaneously satisfies all three deniability properties?
That's right, those three targets that we chose. All right, any further questions? If not, let's thank Nirvan again. Thank you. Let's move on to the next session. Thank you.