is from two lovely gentlemen, Dave and Leif. In the Fahrplan you might have seen Moe, but that changed. So we have this talk by David Stainton and Leif, and the title is Introduction to Mix Networks and Katzenpost. This is about a new anonymity network and how mix networks actually work. I have no clue, so I'm thrilled. Hopefully you are as well. And here are Leif and Dave.

OK, I'm going to give a brief introduction. David has a lot to say about mix networks; I'm going to talk more generally about anonymous communication first, to give some context. The first question I want to ask and answer is: what does the word anonymous really mean? If we look at the dictionary, of course the good source of truth here, it shows us that anonymous means without a name. It has a few different definitions there, but none of them really applies to privacy-enhancing technologies, to anonymous communication technologies. The closest thing in this definition is actually about Wikipedia: it mentions anonymous contributors to Wikipedia. Edits on Wikipedia made while not logged in are often called anonymous edits. Nowadays they call them IP edits, which is more accurate, but a lot of people still call them anonymous edits. And those edits are much worse for privacy in a lot of regards than if you were to log into Wikipedia. You could have multiple pseudonyms, and if you do that, your edits are not linkable to each other, which I'll talk more about in a moment. So, oh, wrong computer. There's an interesting project you can find on GitHub at github.com/edsu/anon that tracks so-called anonymous edits on Wikipedia and identifies who's doing them. And there's a thing called Congress edits, which showed edits made by people in the US Congress, and lots of other places as well. So let's see.
So when people talk about anonymity in the context of privacy-enhancing technologies, they're usually really talking about unlinkability in some form, which is a lot more precise but still too vague, because it doesn't define what is unlinkable to whom. In the literature you can find a lot of more specific concepts, like sender anonymity, receiver anonymity, location anonymity, third-party anonymity. These are all about preventing people from linking who is talking to whom, or who sent a message. Then there are some more interesting properties, like sender unobservability and receiver unobservability, which are about preventing somebody from being able to observe that a user has sent a message at all, regardless of who it's to. All of these are about making events unlinkable from the perspective of some adversary with specific capabilities. We talk about those capabilities in threat models, and threat models should define adversary capabilities and say what a given tool is trying to protect against. Sometimes they're not well defined, and sometimes they are very well defined but not well understood by the users that rely on the software. I think when looking at threat models, it's useful to invert them: instead of thinking about the adversaries a tool protects against, consider the ones it doesn't protect against. So take the example of somebody editing Wikipedia. They post an edit that somebody doesn't like. Who can link that edit to another edit the user has made, or link it to an IP address? There are a couple of different adversaries that clearly can: the operators of Wikipedia, and people who are able to observe the user's traffic to the site.
Wikipedia is encrypted, of course, but say you make this edit from work and it's about your company, and somebody at your ISP or your employer is looking at this edit, they don't like it, and they wonder if somebody at the company did it. If they have NetFlow data that records all the TCP connections, they can see who sent that amount of data to Wikipedia at that time, and they can clearly say this user probably made that edit. So those are two very different adversaries: the site admin, and anybody able to observe the user. The observer can do this because they can see the other end as well, because that end is public if you're making a public post. Also, anybody who could compromise one of those two parties, the site admins or somebody who can observe the user, can do the same. So the most well-known anonymity tool today is Tor, which I'm sure everybody here has probably heard of. There are pictures like this that show how Tor works. There are actually layers of encryption on each of those green lines, so David made another picture here that shows the layers of encryption. You pick a path through the Tor network: you connect to the first hop there, the entry guard, extend to the middle, and extend to the exit. The middle can only see that traffic is coming from the entry and going to the exit; it doesn't know the client or the destination. Likewise, the entry guard knows about the client and the middle, but not the exit or the destination. Tor's threat model is somewhat well known; it's in their 2004 USENIX paper, oops, yeah. This is the first paragraph of the Tor threat model, which says that a global passive adversary is the most commonly assumed threat when analyzing theoretical anonymity designs. At the time this was written, mixnets were being researched a lot and the Tor network was very new, and they say: instead of trying to protect against that, we're going to try to achieve more by having a lower-latency system.
We'll talk about the properties of mixnets and why they made these trade-offs. A global passive adversary at the time was considered to be pretty unlikely; it was kind of a theoretical thing. Of course, today we know that there are entities that observe all traffic passing by lots of different points around the world, and they're not even just passive, they do active attacks as well. And unfortunately, there still isn't a general-purpose anonymity system that's designed to protect against those sorts of adversaries today. But even worse than that, there are much weaker adversaries that can also often do what Tor is trying to prevent. The second paragraph of the threat model explains that it's not just global passive adversaries: anyone who can see both ends of a connection can confirm that the communication is happening. And in the case of posting a public message, everybody can see one end of the connection. You can see when a blog comment was posted, or when a Wikipedia edit was made, and so on. So going back to the earlier example with those two different adversaries, the site admins or somebody who's monitoring your traffic, like your ISP or somebody on the Wi-Fi, your employer, a university: in some scenarios they could still tell who made that comment on a blog, even if you're using Tor, even if they don't observe anything except for your connection, if you have a local adversary at your university or your employer. So that's not great. So am I saying we shouldn't use Tor? Actually, there was supposed to be another slide in here that said why use Tor, and I've got a lot of reasons why I still use Tor, even though it hasn't got the strongest threat model, and I recommend people use it for lots of things. The reasons are that it provides location anonymity from sites if they aren't observing your network connection.
It provides some browsing privacy from adversaries that do observe your local connection, if they aren't observing the sites. A single-hop VPN could do both of those things, but it's much weaker, because the anonymity set is much smaller and the VPN is a single point of failure that could be observed and see all of your sources and destinations, linking them all together regardless of where you travel to. So while Tor is not able to defend against somebody who sees both ends of the connection, it is a lot better than any other alternative for a general-purpose anonymity system. The Tor Browser also has some great anti-tracking features, and there are hidden services. Tor is great, I don't want to sound like I'm saying otherwise. It's just, we'd like to have something stronger. And so that's mix networks. Mix networks were proposed, I think, maybe in 1979, and the paper about them came out in 1981. This is the first paragraph of David Chaum's paper, Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms, from 1981. So how do mixnets work? They're kind of like Tor: they have the layers of encryption, but there's a big difference, which is that they reorder the messages. All the messages are fixed size. So we have four messages going into a mix. Inside the mix, as external observers, we don't see what happens: a layer of encryption is removed, and new messages come out in a different order that are bitwise unlinkable from the inputs. The mix, of course, can link them, so you actually want to have more than one mix. You could think about the whole mix network as a single mix, logically, though. So messages go in, messages come out, and you can't explain that. So I guess I'm going to turn it over to David here.

Okay, so Leif mentioned the threshold mix strategy. I didn't, actually. Well, the threshold mix strategy: say we have a threshold set to four, so this mix wouldn't send any messages until it accumulated four messages.
And so that means that each of these output messages has a 25% chance of being linked with any given input message, which is a pretty weak security margin. So if we were going to use this in a public deployment, in a real-world scenario, we would want to set the threshold to something like 10,000 or a million. We can also use what are called continuous-time mix strategies, which don't have a concrete bound on the anonymity set size. For example, Katzenpost, which we're going to talk about, uses the Poisson mix strategy, which was first published in the Loopix paper. It's a continuous-time mix, so messages come in and out of the mix at random times, and users set the delay at each mix to a random delay. So, continuous-time mix strategies, in this case, would, well, let's talk about the architecture. You have to get the public key material of all the mixes to be able to send a nested encrypted packet through the mix network. Mixminion, which was a project from before the Tor project, used the pool mix, and it had a PKI system similar to Tor's. Katzenpost uses a directory authority system similar to Tor and Mixminion, but we plan to improve its design in the future. Right now it's not Byzantine fault tolerant, but it is slightly decentralized in its voting protocol. Once clients gain access to all the connectivity information and all the key material, they can send these nested encrypted packets through the mix network. Okay, so this is a great paper that discusses mix strategies. There are many different trade-offs for mix strategies, performance versus anonymity trade-offs, and this paper takes a functional look at the different trade-offs for various mix strategies. But one thing they all have in common is that they all add latency, whereas Tor tries to route messages through the network as quickly as possible. And actually, that might not be the most accurate way to say it, because Tor actually routes cells, which are pieces of a stream.
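To make the two mix strategies just described concrete, here is a toy sketch in Python. The function names and structure are my own illustration, not Katzenpost's actual code: a threshold mix buffers and shuffles a whole batch, while a continuous-time (Poisson) mix holds each message for an independent random delay.

```python
import random

def threshold_mix_flush(batch, threshold):
    """Toy threshold mix: hold messages until the threshold is reached,
    then shuffle and flush the whole batch at once. Returns None while
    still accumulating."""
    if len(batch) < threshold:
        return None  # keep accumulating
    out = list(batch)
    random.shuffle(out)  # reorder: outputs are unlinkable by position
    return out

def poisson_mix_delay(rate_per_second):
    """Toy continuous-time (Poisson) strategy: each message is delayed
    independently by an exponentially distributed amount, so there is
    no fixed batch and no hard bound on the anonymity set."""
    return random.expovariate(rate_per_second)

# With a threshold of 4, a watcher who saw one target message go in can
# link it to any given output only with probability 1/4.
batch = ["m1", "m2", "m3"]
assert threshold_mix_flush(batch, 4) is None   # below threshold: silence
batch.append("m4")
flushed = threshold_mix_flush(batch, 4)
assert sorted(flushed) == ["m1", "m2", "m3", "m4"]
assert poisson_mix_delay(2.0) > 0
```

In the real designs the outputs are also re-encrypted so they are bitwise unlinkable; the shuffle here only models the reordering.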
So I want to discuss some attacks and some defenses, and we have a lot to cover, and I thought I would do it by talking real fast. That's kind of a talking strategy to get all the information in here. Most of the attacks we're going to talk about today apply to all communication networks; they don't just apply to mix networks. All of these attacks apply to Tor and other anonymity systems as well, so it's pretty useful. But this n-1 attack is really specific to mix networks. To briefly describe it: if we have this threshold mix, and the adversary controls some routers upstream or downstream but doesn't control the mix itself, then when they see the target message enter the mix, they can just send their own messages into the mix to make sure it hits its threshold, so the messages are shuffled and sent out. They know what their own messages will look like coming out, so the one message they don't recognize is obviously the target message. They've just traced the message through one hop of the network. This attack would have to be repeated through all the hops in the route in order to completely compromise the unlinkability between sender and receiver. So that's one example of an attack, and it's called an n-1 attack, but there are many variations on it, and this is a pretty good paper to read to get an overview of how to apply it to different types of mix strategies. So, Ania Piotrowska is the main author behind the Loopix anonymity system, and Katzenpost is based on a lot of the design from her paper, which includes the Poisson mix strategy and this idea that we can trade decoy traffic off against latency: we can make the latency a bit lower if we're willing to accept some bandwidth overhead and send decoy messages. But there's... Can I interrupt for a moment?
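The n-1 flood just described can be simulated in a few lines. This is purely illustrative: in a real mixnet the adversary recognizes its own messages by their next-layer ciphertexts, which this toy model glosses over by comparing plaintext labels.

```python
import random

def n_minus_one_attack(threshold):
    """Toy n-1 attack on a threshold mix: the adversary blocks other
    traffic and fills the batch with threshold-1 messages of its own,
    so the target shares the mix with nothing unknown to the attacker."""
    target = "target-msg"
    adversary_msgs = {f"adv-{i}" for i in range(threshold - 1)}
    batch = [target, *adversary_msgs]
    random.shuffle(batch)                    # the mix reorders the batch
    unknown = [m for m in batch if m not in adversary_msgs]
    return unknown[0]                        # exactly one message is not ours

# Despite the shuffle, the target is identified with certainty:
assert n_minus_one_attack(4) == "target-msg"
```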
Something that I neglected to say here was that the original mixnet projects that were deployed in the late 90s and early 2000s had extremely high latencies and were sending email only, which made them not very useful and thus gave them not a very big anonymity set. And the renaissance of mixnet research in the last decade or so has sort of arrived at thinking that maybe we could actually have much lower latency while still having very strong anonymity. So that's where the work is today. Yeah, do you want to mention the anonymity trilemma? I think you have that later. But yeah, there's this paper about the anonymity trilemma which, with a big oversimplification, says there are three things you'd like: really strong anonymity, low latency, and low bandwidth overhead. Tor is on the low-bandwidth-overhead, low-latency side, so the anonymity is not as strong. The new thinking is that you could have a bit more bandwidth overhead by sending a lot of cover traffic, and have reasonably low latencies, still with much stronger anonymity. Yeah, so in the Loopix model, we, let's see, let's not talk about n-1 attacks against the Poisson mix strategy in this talk. So there are various types of topologies for mixnets, and David Chaum's first paper, published in 1981, covers the cascade topology. One interesting difference between Tor and mixnets is that Tor has to have lots of relays, a big route permutation space, because you want route unpredictability so that your adversaries can't predict your route. For mixnets, we just don't need route unpredictability: everyone could be routing through the same four hops and it would work fine, except it wouldn't scale, so you need more mixes so it can scale. But with just four hops, you could still provide strong anonymity for however many users they have capacity for. Yeah, so a cascade doesn't scale, and it doesn't have high availability.
And so 10 years ago or so, people thought that free route was a good alternative to cascades, because you at least have high availability: if one of these nodes in the network has an outage, you can route around it. Free route is a topology where you can make a path that goes from any relay to any other relay. You can pick your path completely freely, hence free route. Yeah, so one of the downsides is that if Alice is sending a message to Bob and Bob is sending a message to Alice, and they're using the same three relays but in a different order, the messages are going to intersect at one of those mixes, and when they do, the source of each message will be a distinguishing characteristic, so the messages won't actually be mixed. What we mean to say is that the anonymity sets on the mixes will be split into smaller sets, so there's a smaller security margin. And so these academics came up with the stratified topology, also known as the layered topology. Here we have a diagram with three layers, layer one, layer two, layer three, and each layer can only send to the next layer, to the right. In Katzenpost, and in Loopix, we have providers at the beginning and end of the route. Providers are a superset of a mix: they additionally allow message queuing for later retrieval, and network services that you can interact with. But there are other approaches as well. If you think about it, if there were a compromised mix in each of these layers, then the more messages you send, since you choose a new route for every message, the more you increase the probability of eventually choosing a bad route. In our threat model, a bad route is defined as a route in which every single mix in your route is compromised. Then it's game over, right? Whereas if there's one honest mix in your route, you still have this unlinkability property between input and output messages.
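The point about repeatedly choosing fresh routes can be made quantitative with a quick back-of-the-envelope calculation. The numbers are hypothetical, assuming a fraction c of the mixes in each layer is compromised and each route is chosen independently and uniformly:

```python
def p_bad_route(c, layers):
    # Probability that a single uniformly chosen route is "bad",
    # i.e. every hop is compromised: c per layer, layers independent.
    return c ** layers

def p_any_bad_route(c, layers, n_messages):
    # A fresh independent route per message: the chance that at least
    # one of them is fully compromised grows with every message sent.
    return 1 - (1 - p_bad_route(c, layers)) ** n_messages

# With 10% of each layer compromised and 3 layers, a single route is
# quite safe, but over many messages a bad route becomes near-certain:
assert abs(p_bad_route(0.1, 3) - 0.001) < 1e-12
assert p_any_bad_route(0.1, 3, 1) < 0.01
assert p_any_bad_route(0.1, 3, 10_000) > 0.99
```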
So another approach to dealing with that sort of compulsion attack threat model is to instead have various cascades distributed by the PKI, so clients can choose whichever one they want. When you allow clients to choose routes, though, you need to make sure their choices aren't a distinguishing characteristic for that one client; otherwise there could be fingerprinting attacks. This is why a gossip protocol is probably not a good way to distribute information about the network, and why we have a PKI that distributes a consensus document covering the entire network to all the users, so that everybody has uniform knowledge about the current state of the network and one user's path selection will be indistinguishable from other users'. Yeah, and this doesn't only apply to mixnets: Tor and I2P and all anonymous communication systems have the potential for route fingerprinting attacks, which we would like to avoid. But I feel like the most important attack category that we want to protect against is statistical disclosure attacks. All types of statistical disclosure attacks are important to have some defense against, but mainly we are trying to provide a partial defense against long-term intersection attacks. We can abstract away the entire mix network as if it's a single mix, a single router, with some input messages and some output messages. If, for example, Alice 1 goes offline, Bob 1 and Bob 2 might be observed to receive 20% fewer messages. If that's the case, it's now obvious that some metadata was leaked, right? Alice 1 was previously sending those messages when she was online. This statistical disclosure attack applies to all communication systems, and so all our communication systems leak some amount of metadata. An expanded version of this diagram is this.
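A toy long-term intersection attack over that abstracted single-mix view might look like this. It's a sketch under the simplifying assumption that the observer sees, per round, whether the target sender was online and which receivers got messages; none of the names come from any real system.

```python
def intersection_attack(observations, receivers):
    """Score each candidate receiver by how much more often they receive
    messages in rounds where the target sender is online than in rounds
    where she is offline; the biggest gap is the likely correspondent."""
    on  = [recs for online, recs in observations if online]
    off = [recs for online, recs in observations if not online]

    def score(r):
        rate_on  = sum(r in recs for recs in on)  / max(len(on), 1)
        rate_off = sum(r in recs for recs in off) / max(len(off), 1)
        return rate_on - rate_off

    return max(receivers, key=score)

obs = [
    (True,  {"bob", "carol"}),   # Alice online; these receivers got mail
    (True,  {"bob"}),
    (False, {"carol"}),          # Alice offline
    (True,  {"bob", "dave"}),
    (False, {"dave"}),
]
# Bob's delivery rate correlates with Alice being online:
assert intersection_attack(obs, ["bob", "carol", "dave"]) == "bob"
```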
So if clients are receiving messages directly from the mix network, then a passive adversary on the right (the vertical line represents the part of the network they're viewing) can see all the clients send messages into the network and all the messages come out, and they can make these long-term statistical predictions, or sort of assumptions, about which client is talking to whom. However, if we replace the edge of the network with providers that have many queues, one for each client, then the statistical information that's leaking here on the right is a lot less specific about which client. If each provider has, say, 10,000 message queues for 10,000 different users, then this would be leaking a lot less information. Katzenpost also has clients retrieve messages later from providers, and they use a traffic-padded protocol to do so, so each of these clients receives the same amount of data. Jean-Paul, in this example, has one message in his queue, and his retrieval is indistinguishable from the other retrievals. So that's a little bit about statistical disclosure attacks; there's actually a lot of literature about them. So, we use the Sphinx packet format. It's specifically designed for decryption mix networks, but it has also been used in the design of low-latency systems, like HORNET. Sphinx was designed as a drop-in replacement for the packet format in Mixminion, though it was never deployed as such. We have a slightly modernized version of it in Katzenpost, where we use newer cryptographic primitives. I don't have time to talk about all the details, but basically, since we don't have an interactive bidirectional communication channel, we're using these Sphinx packets that are transformed as they traverse the network. We don't have forward secrecy properties, so we are more vulnerable to compulsion attacks. Actually, this is interesting, because in this one particular threat model, Tor is a bit safer than mix networks.
So when we talk about overlay networks, we usually refer to the wire protocol that you actually talk to the machines over as the link layer. This is what connects the clients to the mixes and the mixes to the other mixes. In Tor they use TLS, and in our mix network we use a Noise-based cryptographic protocol. If we ignore this cryptographic link layer for a minute: suppose you were to grab some Tor ciphertext, say from an ntor handshake, and you wanted to get the police to legally compel a Tor relay operator to decrypt it so you can find out where the next hop is. If you were to try to do that with Tor, it would be difficult to make such an attack successful, since you have these ephemeral keys being destroyed every few minutes by every hop in the circuit. In mix networks, there is some opportunity to perform these attacks with legal action, police raids, or just compromising the mixes in the route. I want to clarify a little why the compulsion attack is possible here: mixes have to use a fixed key. You don't set up a session like in Tor, where you create a circuit by extending to each hop. In Tor, before you actually send your request, your HTTP request or whatever you're sending, you do an ephemeral Diffie-Hellman with each of the intermediate hops, so you have key material that can be thrown away at the end of that stream or circuit. In mix networks, you don't have this state; each message is an independent thing. So the mixes can't throw their keys away after each message: you have to use whatever the current key is. Instead, the keys are rotated out at some key rotation interval, and during that interval, this compulsion attack is possible, where somebody could find a message they want to de-anonymize and go to each hop and say, or rather tell each hop: you have to decrypt this or we'll kill you, or something. So that's the compulsion attack.
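The fixed-key layering behind this can be sketched with a toy nested-encryption model. To be clear about the assumptions: the XOR-with-keystream "cipher" below is a stand-in for illustration only, Katzenpost actually uses the Sphinx packet format, and the key names are invented.

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR with a SHA-256-derived keystream.
    With XOR, peeling a layer is the same operation as adding it."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap(msg, hop_keys):
    # The client adds one layer per hop, innermost layer first.
    for key in reversed(hop_keys):
        msg = xor_stream(key, msg)
    return msg

# The mixes' long-lived epoch keys (rotated only every few hours):
epoch_keys = [b"mix1-epoch-key", b"mix2-epoch-key", b"mix3-epoch-key"]
packet = wrap(b"hello bob", epoch_keys)
for key in epoch_keys:           # each mix peels one layer in turn
    packet = xor_stream(key, packet)
assert packet == b"hello bob"
# Compulsion risk: these same epoch keys decrypt ANY packet recorded
# during the rotation interval -- nothing ephemeral gets thrown away.
```

Contrast this with Tor, where each circuit's keys come from an ephemeral handshake and are destroyed when the circuit closes.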
Yeah, so key erasure is the main defense we have. In 2002, George Danezis published a paper, Forward Secure Mixes, where you interact with a specific mix key which is destroyed immediately after your packet traverses that route. However, you're leaking extra metadata to that mix. You're saying: I'm the same entity that interacted with you last time, and now I'm doing it again, and you're destroying the key for me, thanks. So this is kind of a trade-off: on the one hand, you're leaking extra metadata to this one mix; on the other hand, it destroys the key sooner. In Katzenpost, currently, the key rotation schedule is set to three hours, but we plan to soon set it much lower, maybe under an hour. And so this is our main defense, this key erasure. There are other partial defenses against compulsion attacks. Here's another paper about it that's pretty interesting; it's got deniable routing, multi-path routing, and things like that. I don't want to go into any detail about these, but I just want to mention that there are other avenues of thought regarding mixnet design. Amir Herzberg is one of the authors of these two papers, and they have a different kind of strategy. I mentioned the multi-cascade topology before, and it's somewhat related to the compulsion attack in the sense that, like I mentioned before, if you send your messages through the mix network with a new route for each message, you increase the probability of choosing a bad route, which you don't necessarily want. So you could use one route for a while, and then switch your route. And so our mix network is an overlay network, and what we mean by that is we're using IP, right? We're using IPv4, we're using TCP, and we're adding some protocol layers on top of that. So we can have a custom automatic repeat request error correction scheme to retransmit messages if they get lost in the mix network, and things like that. And so Trevor Perrin helped us a bit.
Yawning Angel designed our wire protocol; it's based on the Noise cryptographic protocol framework. Peter Schwabe was also very helpful, communicating about NewHope Simple and Kyber, and was involved in creating the post-quantum key encapsulation mechanisms that we use. So we have a post-quantum cryptographic link layer, which I think is pretty cool, thanks to Yawning Angel and these other people. And so this is the anonymity trilemma we mentioned before: there's an inherent trade-off between latency, bandwidth, and the strength of anonymity that is offered by the system. I just want to mention that in Katzenpost, or in the Loopix design in general, on top of the Poisson mix strategy on the mixes, clients aggregate several Poisson processes and send out traffic in which legitimate traffic is mixed with decoy messages. The reason they do this is so that you can tune the mix network by modifying parameters in our PKI. The parameters are distributed to all the clients, the clients set their traffic according to them, and then everyone's traffic looks more or less the same. And there's a kind of trade-off between how much decoy traffic is sent, how much latency there is, and the strength of the anonymity. So the tuning problem is, well, here's an example of the drop message. Alice chooses a random provider in the network to send it to, and the provider sees that it's a drop message, so it drops it. It's not fooling an adversary that's on the destination provider; it's only indistinguishable to a passive adversary watching the network, in this case. And here's a loop message. A loop message in Katzenpost is sent to a loop service on the destination provider, and Alice uses a single-use reply block, a SURB. That's a mechanism within the packet format that allows anonymous replies: the reply is sent to Alice without the provider knowing Alice's location on the network.
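The client-side aggregation of Poisson processes described above might be sketched like this. The parameter names lambda_p, lambda_l, lambda_d follow the Loopix paper's payload/loop/drop streams; the scheduler itself is my own simplified illustration, not Katzenpost's actual tuning logic.

```python
import random

def next_send(queue, lambda_p, lambda_l, lambda_d):
    """Toy Loopix-style client scheduler. Superposed Poisson processes:
    the next event fires after an exponential delay at the summed rate,
    and the event type is drawn proportionally to each stream's rate."""
    total = lambda_p + lambda_l + lambda_d
    delay = random.expovariate(total)
    kind = random.choices(["payload", "loop", "drop"],
                          weights=[lambda_p, lambda_l, lambda_d])[0]
    if kind == "payload" and not queue:
        kind = "loop"        # nothing queued: emit cover traffic instead
    msg = queue.pop(0) if kind == "payload" else f"decoy-{kind}"
    return delay, kind, msg

# With only the payload rate nonzero, a queued message is sent:
delay, kind, msg = next_send(["real message"], 1.0, 0.0, 0.0)
assert kind == "payload" and msg == "real message" and delay > 0
```

The point of the substitution in the middle is that an observer sees the same send rate whether or not the user has anything real to say.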
I think maybe a little more should be said on this slide here about the single-use reply block. The light blue line there, the reply, is a route that Alice has chosen, and she sent it to the provider at the bottom left, so they can't see what the return route will be. Alice picked it and gave them this SURB, a single-use reply block, which is sort of like a self-addressed stamped envelope that says: hey, this is how you reply to me. And so this is the mechanism for anonymous replies. But these are only good for the duration of the key rotation interval, which is that same trade-off about how long you have to do a compulsion attack. SURBs in older mixnet designs were actually used for human replies, but in our current design, they're more for automatic replies from services: you're not going to use a SURB to reply the next day, it's going to be used soon, within the same key rotation interval. Right, and that also reminds me of what Leif said: since we're using SURBs with the Poisson mix strategy, the client, Alice in this case, chooses the delays, all the forward delays and all the reply delays. So she actually knows a rough estimate of the round-trip time, or a fairly accurate one unless there's computational overhead at each hop. She knows the artificial delay placed by each mix for the round trip, which is helpful because if she doesn't get the reply, she knows it was lost in the network. And so there's, let's see, drops, loops, and Lambda P, which could be a forward message, or perhaps a loop if the queue is empty. And here we have another diagram where Alice is retrieving a message from a rendezvous provider. So instead of rendezvousing on the provider you directly connect to, you want to exchange messages somewhere else on the network, because you want some unlinkability property. You want to retrieve your messages using the mix network as the communication transport.
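The round-trip estimate mentioned above follows directly from the fact that Alice samples every per-hop delay herself, both on the forward path and inside the SURB. A minimal sketch, with invented numbers and an assumed per-hop processing overhead:

```python
def expected_rtt(forward_delays, reply_delays, per_hop_overhead=0.0):
    """Alice chose every mixing delay on both legs, so she can bound the
    round-trip time up to processing/transmission overhead per hop."""
    hops = len(forward_delays) + len(reply_delays)
    return sum(forward_delays) + sum(reply_delays) + hops * per_hop_overhead

fwd = [0.8, 1.3, 0.4]    # seconds of delay Alice picked for the forward route
rep = [1.1, 0.2, 0.9]    # delays she baked into the SURB's return route
rtt = expected_rtt(fwd, rep, per_hop_overhead=0.05)
assert abs(rtt - 5.0) < 1e-9   # 4.7 s of mixing delay + 6 hops * 0.05 s
# If no reply arrives well past this estimate, the message was lost and
# Alice can schedule a retransmission.
```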
So I mean, it's a fairly simple design, and it's different from the Loopix paper. The Loopix paper is not actually presenting an anonymous communication system in the sense that Leif was talking about, where we have location hiding properties. Here we want a kind of mutual distrust property: if Alice and Bob are talking to each other and Alice is compromised, she shouldn't learn Bob's location on the network, or rather, the adversary compromising Alice shouldn't learn Bob's location. And so they can use these intermediaries. What I wanted to explain really quickly is that if Alice's behavior is predictable, there's an active confirmation attack. If she retrieves a message from this rendezvous provider on the network and it's compromised, then the adversary could create an outage on the network so that the reply might never get to Alice. And if Alice then sends the reply again, it might use the same sequence number or so, and the adversary just got a positive confirmation that Alice is on a certain half of the network, right? Because they've created an outage for half of the nodes that could receive the reply, the adversary gets to learn which half she's on. And if you have some background in computer science, you can easily see this attack would succeed in logarithmic time; it's basically a binary search. And so instead we want to randomize the retransmission delay. This might create usability problems, but we can't have client behavior be predictable. And so Katzenpost, internally, is a series of queues connected in a pipeline, but providers can have services, and we have a plugin system so you can add services to the network. So we really want to collaborate with other developers, and academics as well. If developers would be interested in creating new messaging systems, we support that. We have messaging systems that are usable now on Katzenpost, but we also want to support making new messaging systems.
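The "logarithmic time" claim is just binary search, which a toy version makes plain. This sketch assumes an adversary who can induce outages over any half of the remaining candidate providers and can tell, from the predictable retransmission, whether the outage hit Alice:

```python
import math

def outages_to_locate(n_providers):
    # Each induced outage halves the candidate set: classic binary search.
    return math.ceil(math.log2(n_providers))

def binary_search_confirm(candidates, alice):
    """Toy active confirmation attack: knock out half the candidates,
    observe whether the (predictable) client retransmits, and keep the
    half consistent with that observation until one candidate remains."""
    steps = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        retransmitted = alice in half      # outage hit Alice's provider
        candidates = half if retransmitted else candidates[len(half):]
        steps += 1
    return candidates[0], steps

where, steps = binary_search_confirm(list(range(16)), alice=11)
assert where == 11 and steps == outages_to_locate(16) == 4
```

Randomizing the retransmission delay breaks the `retransmitted` signal the attacker relies on, which is exactly the defense proposed above.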
So it could also be used to transmit cryptocurrency transactions, obviously, because that's just a message, and Ania, the author of Loopix, also wrote a paper about that. And you would want other people transmitting their cryptocurrency transactions too: you want to hide in a crowd of other people doing the same thing. Mixing with other services on the same network wouldn't really create the end-to-end unlinkability property you want; you're only really being mixed with the other people doing the same thing in the network. So you might be thinking: why does Zcash need this? Isn't it already anonymous? The anonymity that Zcash provides keeps you from linking transactions together, but it doesn't provide sender unobservability, and that's what the mixnet can provide. Zcash, like many things, could potentially have statistical disclosure attacks against it, where you see when somebody sends a message, and that's a useful piece of information. If you're a mixnet Zcash client, people can see that you're connected to the network, but they can't see whether you're sending right now. Okay, so what's next? I think maybe we need to skip some of these slides if we want to have time for questions. Or no. Okay. Right, yeah. These slides are a bit too much detail, but after them I have some paper recommendations. Hold on. Let's just skip all these things, you don't need it. The Moral Character of Cryptographic Work by Phillip Rogaway is a really excellent essay, and we want other cryptographers and computer scientists and people like that to think more about helping society, and not just furthering their academic careers and publishing papers that have no meaning. So we'd like cryptographers to collaborate with us and with other computer scientists, and to do practical things that would affect society positively. He mentions Chaum 81, Chaum's 1981 mixnet paper, 13 times in this essay, and what he's really trying to point out is that we should think about protecting the metadata that's leaked.
Cryptographers focus too much on confidentiality, or on these zero-knowledge proof systems or whatever fancy thing they're working on, but they don't focus enough on protecting the metadata that we're leaking. I also wanted to mention that the authors of the anonymity trilemma paper wrote a technical report, published on their website, called Beyond MixNets. It talks about hybrid networks, which are pretty interesting and have some performance and security properties that seem to be slightly better than mix networks; you can enhance a mix network using a hybrid scheme by adding offline secret sharing. And I think this other paper, Privacy Notions, is pretty cool. It talks about threat models for anonymous communication networks, what privacy notions you expect from these systems, and how exactly to articulate that. So that's our talk. Are there any questions? Before questions, I'll answer a couple that I anticipate. What's the state of the project today? Katzenpost: David and other people have done a lot of work on it. There's code you can run, and there's a test network that is run by David, so it's not providing real anonymity properties. It's not a thing you should start using today unless you want to hack on it and play with it. There's a simple text-based client. There's a lot of work left to do, but it's being very actively developed. Another frequent question is: what is the delay? It's lower latency than the historical mixnet designs, but still not as low as Tor. It's not low enough to type in an SSH session or something, and I think David really doesn't like to answer that question because it has to be tuned, and there are a lot of decisions to make about how low the latency should be. But I think we can say on the order of seconds, not minutes, is what we anticipate being reasonable. 
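The "order of seconds" figure comes from per-hop mix delays. Here is a sketch of how a Poisson-style mix strategy (the kind Loopix describes) might sample them; the hop count and mean delay are assumptions for illustration, not Katzenpost's tuned values:

```python
import random

def sample_route_delays(hops=3, mean_delay_s=2.0, rng=None):
    """Draw one exponentially distributed delay per hop, as in a
    Poisson mix strategy; total end-to-end latency is their sum."""
    rng = rng or random.Random()
    return [rng.expovariate(1.0 / mean_delay_s) for _ in range(hops)]

delays = sample_route_delays(rng=random.Random(7))
print([round(d, 2) for d in delays], "total:", round(sum(delays), 2))
```

Because each delay is drawn independently, observers can't correlate a packet's entry and exit times, which is exactly the tuning trade-off between latency and anonymity the answer alludes to.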
The other question I frequently get is about censorship circumvention, and yes, our software works with Tor onion services, so you can use it with Tor and use pluggable transports; Tor has got that stuff covered already. Thank you for this super interesting talk. We do have some more minutes for questions, so there is one microphone angel up in the front. If you'd like to ask a question, which I would like you to do, then please step up to this lovely gentleman and talk into his microphone. And in the meantime, is there a question from the internet? Dear signal angel, do you have anything for us? That seems not to be the case, therefore please go ahead with your question. Sure. You started by saying that Tor's threat model does not cover a global passive adversary, and it seems that the threat model that Katzenpost and mixnets have does cover that, and perhaps goes stronger, because you talk about compulsion attacks, but maybe does not go all the way to what we might call a fully global active adversary. Can you sort of draw the line of what you think your threat model is, and where that boundary lies? So first of all, in the Katzenpost model the receiving side of the messages is a provider, so if you wanted to do a statistical disclosure attack with a high amount of statistical information being leaked, it's helpful to compromise the provider so you can see which message queue receives messages. The other active attack for mixnets I can think of is an n-1 attack, where you're attacking the mix strategy, and we have some partial defenses against that, but it's not a full defense. 
One aspect of the Loopix design is that the individual mixes send loops, so they could potentially detect that an n-1 attack is happening: if messages that they send through a certain relay aren't ever making it back to them, they can see that. What to do about it is kind of an open question, but the PKI operators, the people equivalent to Tor's directory authorities, could make policy decisions and say we're not going to use that relay anymore because its ISP is facilitating some sort of attack. Yeah, I think the... I mean, the compulsion attack is the other active attack, right? With mix networks, we still get these security properties even if we only have one honest hop in our route, so that's a kind of defense by design, so to speak. Yes, please go ahead. Just a quick remark about threat models: threat models can be put in a strictly stronger and strictly weaker order, which was kind of the question before, right? I think, if that clears anything up, but maybe David can say something about that. About stronger versus weaker threat models, I think. Yeah, because the last question was also about compulsion attacks, which Tor is better against. Okay, it's almost an unfair comparison, because Tor is easily broken by much weaker adversaries and all these other attacks. It is true that Tor is stronger just for compulsion attacks, but for mixnets, since we can have partial mitigations, we can make the window of time very small in which the adversary can actually compromise the key material and make use of it. And another defense is that we can put our mixes on different continents and in different countries around the world, and make the man's job really hard, right? To trace your path through the network. But I think that point really does illustrate that you can't say threat models are necessarily stronger or weaker than one another. 
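The loop-based n-1 detection described above can be sketched with a toy health check. The function name, IDs, and loss threshold here are invented for illustration, not taken from the Loopix paper or Katzenpost:

```python
def loops_look_healthy(sent_loop_ids, received_loop_ids, max_loss=0.2):
    """A mix sends decoy messages in loops back to itself. If too
    many loops routed through a relay never return, that relay may
    be dropping traffic, e.g. as part of an n-1 attack."""
    if not sent_loop_ids:
        return True
    lost = len(set(sent_loop_ids) - set(received_loop_ids))
    return lost / len(sent_loop_ids) <= max_loss

sent = [f"loop-{i}" for i in range(10)]
print(loops_look_healthy(sent, sent))      # True: all loops returned
print(loops_look_healthy(sent, sent[:5]))  # False: half were dropped
```

In the scenario from the talk, a failing check would be the signal the PKI operators could act on when deciding to drop a relay from the directory.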
They can be better or worse in different dimensions. Yes, please, one more question from the audience microphone, or a remark. Just to say, because there was also some question about the global passive adversary: the active one usually can also inject and delay and things like that, because you didn't quite get to that one. Thanks a lot for your remarks. David, would you like to respond? I mean, we've pretty much covered the active adversary when we talk about n-1 attacks and compulsion attacks. There are some other attacks that can be done; certainly we don't have a defense against denial of service and things like that. We have partial defenses, but yeah. Is there another question? Yeah, hi, I'm wondering what's your opinion on speed and scalability, because in my opinion one of the major drawbacks of something like Tor is that it's just too slow to use on a daily basis, and it would be much more secure if everybody could use it on a daily basis without such drawbacks. Okay, well, that makes me think of a lot of different things to respond with. One of them is that mix networks tend to not be super low latency; they tend to be maybe medium latency. Some of the historical designs are high latency, like Leif mentioned. But we think that mix networks are very efficient for scaling up, much more than Tor, in the sense that we don't need lots of nodes in our network to scale; we don't need lots of nodes for route unpredictability. We just need enough nodes to handle the traffic capacity. And currently we have two public key operations per Sphinx packet, so there's some computational overhead, but it's manageable. So I think mix networks scale very well, probably to millions of users. And it's for a different type of application, right? It's probably not going to be video conferencing; it's probably going to be lower-bandwidth applications, though maybe high-bandwidth applications would work over longer periods of time. 
This is not a thing that you're going to browse the web over in real time, exactly. But you could use it as a transport for downloading web content. That's something various people are considering, but it's not a replacement for Tor, which ultimately gives you TCP connections where you expect pretty low latency. Another interesting side note is that Tor has exit nodes, and generally speaking mix networks don't have exit nodes to the rest of the internet. We could have services at the edge of the network that go and retrieve something for you, like some web content, so you could have an offline browser. Browse the web the way Richard Stallman does, where you send an email to a process and it replies with a snapshot, you know? Yeah, so we get to control the entire network, and we can put decoy traffic everywhere if we don't exit, right? That's one of the advantages. Thank you for this detailed response. Do we now have a question from the internet? That is not the case, so please give a warm round of applause for this very interesting talk to David and Leif. Thank you.