Up next is "Practical Mix Network Design: strong metadata protection for asynchronous messaging," presented by David, who has done research on mix networks and is a contributor to the Katzenpost mix network, and by Jeff, who contributes to the GNUnet project, organized a couple of sessions on this topic at last year's Congress, and is basically a mathematician trying to get practical. They're going to talk about components of mix networks and defenses that Tor simply can't provide. Please welcome them with a big round of applause.

Okay, so I'm Jeff, this is David. We're going to tell you about some aspects of designing mix networks. I'm an academic involved with the GNUnet project; David is involved with the Panoramix project. First of all, just to be clear: encryption works, when it's properly implemented, and we have a huge amount of trust in it. We even have slides showing that the most powerful adversaries in the world can't break these things. So that part is fine. What we have to worry about is metadata leakage, and in this talk we're specifically going to worry about traffic analysis of connections. It's time to actually start addressing these things.

So, existing solutions to traffic analysis. There's the wonderful Tor project, and we know that as of five years ago even the NSA considered Tor quite effective at preventing mass location tracking. So Tor works for what it's designed to do. But Tor does not protect against an adversary who can see both ends of a Tor circuit, and that is a handicap in a number of situations.
The first situation: if you have a website, somebody can fingerprint that website in advance, build a description of its traffic profile, and then tell, just from looking at your connection, whether you're accessing that website over Tor. So let's admit defeat on the web for now, because we're not going to be able to defeat that kind of adversary very quickly. But can we at least message our friends over Tor? There are a few programs that do this: there's Ricochet, there's Briar. The problem with using Tor as a messaging transport layer is that frequently the people you want to protect are in the same country, or even on the same ISP. So the original problem, the adversary being able to see both ends of the connection, comes up again, and the connection between them can very quickly be seen.

So how can we actually keep our messaging metadata private? The answer we're going to argue for is a mix network. Mix networks are message oriented, as opposed to stream oriented; they're essentially an unreliable packet-switched network, and latency is added at each hop, according to what's called a mix strategy. There are a bunch of different mix strategies. Here's an architectural diagram. Notice there are no exit nodes: there's no talking to the web like with Tor, so the security model is different. We do have a PKI, similar to Tor's directory authority system. So there are a bunch of differences between Tor and mix nets.
One of the important ones is that we can actually do decoy traffic everywhere in this diagram: all the way to the clients, or to the destination. One of the issues with Tor is that even if you wanted to add decoy traffic, you couldn't necessarily protect against this website fingerprinting attack, because the adversary still sees the connection coming out the other side, and there's still a lot of analysis they can do.

Some history here. Mix networks are actually the oldest anonymity system, as far as I know; they come from David Chaum's 1981 paper. A few other tools have been proposed since. One of them is private information retrieval, usually written PIR. It works in narrow situations where you're trying to retrieve something from some kind of database; the scaling isn't perfect, but there are cool things you can do. The other alternative to mix networks that gets proposed is dining cryptographers networks, DC-nets. The problem with them is the bandwidth: you're literally paying a quadratic cost per user, so in the end the total is something like cubic. Your anonymity set is going to wind up being very small. And if you're building something that inherently has a small anonymity set, you have to ask: who are we protecting? You're not protecting whistleblowers anymore, because if a whistleblower talks to a journalist, and it's merely unclear which journalist at Der Spiegel he's talking to, well, he's still the guy who knew this thing and talked to somebody at Der Spiegel.
The person a small anonymity set does protect is somebody who already has a lot of power and who would be hard to convict anyway. So what we really want is to blow up the anonymity set as large as possible, and that's why we like mix networks.

All right. We're going to talk about a few attacks on mix networks and some defenses. Epistemic attacks are not one of the attacks we're really going to focus on, because that's a specialized area of research; there are a few papers on breaking the public key infrastructure systems of things like peer-to-peer networks. But we should mention our PKI assumptions. The mix network literature generally assumes you have a PKI, meaning all the clients using the network somehow know about the whole network. Usually, when anonymity researchers talk about a PKI, they assume something like the Tor directory authority system, where a few highly trusted people run the thing. This actually presents a scalability problem, one that both Katzenpost and Panoramix have to deal with, and it's more serious than the corresponding problem for Tor. There are other ideas. On the side of making it more secure than just trusting those few people, there are projects like Cothority. On the side of making it more scalable, we have some people in the GNUnet project researching this. In the past, the peer-to-peer networking projects that tried to come up with distributed PKIs suffered very serious attacks, especially these epistemic attacks, and you're not going to completely fix those.
So the way you would justify a distributed PKI is that you would have to prove you really know how bad the attacks are, and then argue that this is better than some nine people or whatever possibly being compromised. We don't want to talk too much about this, because it's not our area of work, but there's a lot of interesting stuff there. Moving on from epistemic attacks, David is going to tell you how scalability comes in.

Yeah. So mix nets can use cascade topologies, where everyone uses the same route. This is quite different from Tor, where route unpredictability is used to achieve some of its anonymity properties. In a mix net you can use the same route as everybody else, but that's a scalability problem. So we have other topologies, like free route and stratified. Free route actually has slightly worse anonymity; Claudia Diaz has an excellent paper about this. Another point about free route is that in practice networks grow away from it: the Tor network, which you might visualize as a free-route network, has nodes authorized to be in specific positions, exit versus guard flags and things like that. So it may be that you wouldn't land on free route anyway, even if you tried.

This is another diagram, of a stratified topology: any mix in layer zero can connect to any mix in layer one and send it a mix packet. This comes from the Loopix design, and we're going to mention some more of the Loopix design. The cool thing about a stratified topology is that it's fairly easy to calculate the entropy of each mix, compared to, say, free route, where that's pretty complicated. And it also scales well: we can add mixes to each layer if we need to handle more traffic and more users.
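The stratified topology just described is simple enough to sketch directly. The following is a minimal illustration, not Katzenpost code; the function names and the layer/width parameters are mine. It picks one mix uniformly from each layer and computes the route-choice entropy, which for uniform independent per-layer choices is just the sum of log2(width) over the layers:

```python
import math
import random

def build_stratified_topology(num_layers, width):
    """A stratified topology: `width` mixes in each of `num_layers` layers.
    Mix names are made up; a real PKI document lists actual descriptors."""
    return [[f"mix-{layer}-{i}" for i in range(width)] for layer in range(num_layers)]

def select_route(topology, rng=random):
    """Pick one mix uniformly at random from every layer, in order.
    Unlike free route, a packet always moves layer 0 -> 1 -> 2 ...,
    which is what makes the analysis tractable."""
    return [rng.choice(layer) for layer in topology]

def route_entropy_bits(topology):
    """Entropy of the route choice: log2 of the number of possible routes."""
    return sum(math.log2(len(layer)) for layer in topology)

topology = build_stratified_topology(num_layers=3, width=8)
route = select_route(topology)
# 3 layers of 8 mixes each gives 8**3 = 512 equally likely routes, i.e. 9 bits.
```

Adding a mix to a layer increases every route's entropy without changing the analysis, which is the scaling property mentioned above.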
We'll sometimes put citations on the slides. Don't take them too seriously, they're not critical, but the one on this slide, Claudia Diaz's paper, is very nice for understanding the effects of the different topologies, and I believe Roger has a paper on this topic as well.

Okay, so why isn't this Tor? The main thing we can say is that Tor doesn't actually mix: the packets coming in at a particular point in time are basically the same packets going out, so you pretty much know the correspondence, within a very small number. What a mix strategy actually does, and this is an algorithm that's part of the mix software, is add latency to reduce the correlation between input and output packets.

David Chaum, in that first 1981 mix net paper, described the threshold mix. Say this mix has a threshold of four: it accumulates four input messages, and when it has enough to meet its threshold, it shuffles them and sends them out. Mixes are also unwrapping a layer of encryption at each of these hops. So if I were an attacker and wanted to break this, I could wait until the mix is empty, or make it empty by sending my own messages into it. Then, when a target message enters this mix, I send my own messages to make it reach its threshold, so it shuffles and sends all the messages out. I recognize the ciphertexts of my own messages, and the one message I don't recognize is the target message. You can keep doing this at each hop. This is called the n-1 attack, or blending attack, and there are a lot of variations on it. We also have continuous-time mixes, like the stop-and-go mix and the Poisson mix strategies; these mix strategies allow the client to select the delays for each hop.
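The threshold mix and the n-1 attack against it can be shown in a few lines. This is a toy sketch under my own naming, with the per-hop decryption omitted; the point is only the batching logic and why flooding a drained mix deanonymizes the one honest message:

```python
import random

class ThresholdMix:
    """Toy threshold mix: buffer messages, flush a shuffled batch at the
    threshold. Real mixes also strip a layer of encryption per hop,
    which is omitted here."""

    def __init__(self, threshold, rng=random):
        self.threshold = threshold
        self.rng = rng
        self.pool = []

    def receive(self, message):
        self.pool.append(message)
        if len(self.pool) >= self.threshold:
            batch, self.pool = self.pool, []
            self.rng.shuffle(batch)
            return batch          # flushed, shuffled batch
        return None               # still accumulating

# The n-1 (blending) attack: the attacker first drains the mix, then
# fills all but one slot with messages it recognizes.
mix = ThresholdMix(threshold=4)
mix.receive("attacker-1")
mix.receive("attacker-2")
mix.receive("target")              # the one honest message
batch = mix.receive("attacker-3")  # threshold reached, mix flushes

known = {"attacker-1", "attacker-2", "attacker-3"}
deanonymized = [m for m in batch if m not in known]
# The single unrecognized ciphertext must be the target.
```

The shuffle buys nothing here: no matter how the batch is permuted, set subtraction recovers the target, which is why the attack works hop by hop.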
Usually the delays are drawn from an exponential distribution. If an attacker wants to break this with a blending attack, they first need to empty the mix queue, by blocking all input messages to the mix and waiting some period of time after which it's highly probable that the queue is empty. Then they allow their one target message to enter the mix, continue to block other input messages, and simply wait for that message to be output.

We have some defenses against these attacks, like a heartbeat protocol; George Danezis wrote a paper about that roughly ten years ago, and it's mentioned in the Loopix paper as well. The idea is that mixes send a kind of decoy traffic we call mix loops, or heartbeat traffic, where a mix sends itself a message, like a self-addressed stamped envelope: it goes through the mix network and comes back. If the mix doesn't receive its heartbeat within some timeout, it knows it could be under attack, though of course there could be other problems in the network as well, so you would want to correlate several failures to receive heartbeat messages before concluding it's an attack. There are other defenses against blending attacks too; there was a recent paper published, but we're not going to talk about that right now.

The next category of attack is statistical disclosure attacks. I like to think of it as the adversary abstracting the entire mix network as if it were one mix, just watching messages go in and messages come out. A lot of this literature is written from the perspective of point-to-point delivery: Alice and Bob receive messages from the mix network directly at their home IP addresses, as if we all had publicly routable IP addresses and no NAT devices in the way. A more modern architecture might instead involve queuing messages.
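The client-selected delays mentioned above are easy to make concrete. This is a minimal sketch, with names and the mean-delay parameter chosen by me; Loopix expresses the same thing via a rate parameter for the exponential distribution:

```python
import random

def sample_route_delays(num_hops, mean_delay_seconds, rng=random):
    """Client-selected per-hop delays for a continuous-time (Poisson) mix
    strategy: each hop's delay is drawn independently from an exponential
    distribution with the given mean. The client samples these itself and
    embeds one delay per hop in the packet."""
    return [rng.expovariate(1.0 / mean_delay_seconds) for _ in range(num_hops)]

rng = random.Random(7)
delays = sample_route_delays(num_hops=3, mean_delay_seconds=2.0, rng=rng)
total_latency = sum(delays)   # added mixing latency for this packet
```

Because each hop only learns its own delay, no single mix knows the packet's total latency, which is part of what frustrates the correlation the attacker needs.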
Queuing is a concept used in the Loopix design as well. Loopix has a bunch of different decoy traffic types, in order to add noise to the signal at various locations in the network. There's drop decoy traffic, where a client selects a random destination provider to send a message to; it traverses the mix net and then gets dropped by the provider. And there are also client loops. Actually, I should mention, about these statistical disclosure attacks: we don't know how well a lot of this will work in the real world, because it really depends on the specific application and on the adversary's ability to predict user behavior, and that behavior has to be repetitive. It depends on how much information is leaked by the system, and mix networks always leak information; so it's about measuring the leakage and understanding whether user behavior is dynamic enough. These attacks cannot always converge on success; it depends on the particular system and how it's tuned.

In this particular case, with messages queued at providers in this style of mix network, the adversary would have to compromise the destination providers. Previously, in the point-to-point situation where people receive messages from the mix network directly to their mailbox or home IP, the adversary is a passive adversary. In the architecture where messages are queued, which is the Loopix design, from a recent paper, the attack becomes an active attack. And there's some padding toward the client, so we have some amount of receiver unobservability: clients receive the same amount of data whether or not there are real messages for them.

Okay, so there's a natural question. We've talked about adding latency, and we're also talking about adding cover traffic. So you might ask: is this enough? And is there a way to get away with less?
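The "abstract the whole network as one mix" view of statistical disclosure attacks can be sketched as a toy simulation. Everything here is illustrative and chosen by me: the round structure, the number of background senders, and the names. The point is that Alice's repetitive behavior, one message to the same contact every round, sticks out above uniform background noise once enough rounds are aggregated:

```python
import random
from collections import Counter

def simulate_rounds(num_rounds, recipients, rng):
    """Each round, Alice sends one message to her true contact 'bob', and
    ten background users each send to a uniformly random recipient. The
    adversary only sees, per round, which recipients got messages out of
    the (abstracted) mix network."""
    observations = []
    for _ in range(num_rounds):
        seen = Counter({"bob": 1})            # Alice's message, hidden in the batch
        for _ in range(10):                   # background senders
            seen[rng.choice(recipients)] += 1
        observations.append(seen)
    return observations

def disclosure_attack(observations):
    """Aggregate per-round observations; the repeated contact dominates."""
    totals = Counter()
    for seen in observations:
        totals.update(seen)
    return totals.most_common(1)[0][0]

rng = random.Random(42)
recipients = [f"user-{i}" for i in range(20)] + ["bob"]
obs = simulate_rounds(500, recipients, rng)
suspect = disclosure_attack(obs)
```

This is exactly the regime the talk hedges about: if Alice's behavior were dynamic rather than repetitive, or the decoy traffic reshaped the per-round observations, the aggregate would not converge on her contact.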
And the answer to "could I get away with less" seems to be no. At least by some admittedly artificial measures, your anonymity can't really scale better than the cover traffic times the latency. One takeaway from this concerns the "why isn't this Tor" question: Roger always tells people that they don't know whether adding cover traffic to Tor would help, and one extreme version of that is, of course, that whatever cover traffic you add, times something very small, is still something relatively small. Now, you'll notice the anonymity here still looks quadratic in something, but that something is no longer the number of users. So what we're talking about is paying some fixed up-front cost that may be somewhat large: part of it is in the user's experience of the latency, and part of it is in the actual cost of their network connection. But it's doable.

To wrap up this section about topologies and mix strategies: people have made quasi-religious statements about encryption from time to time. To boil that down to something concrete, encryption is basically free. For a mix network, by contrast, we're going to have to pay some real costs.

Okay. One thing about mix networks: you don't want to roll your own packet format. There's a wonderful, very reasonable one, the one that has shaped much of the development in this area: Sphinx. It's quite compact, and it has a very nice security proof. It's by George Danezis and Ian Goldberg. A comment on the design: the packet format has a header and a body, and the body has to be encrypted with what's called a wide block cipher.
At the time Sphinx was developed, the only wide block cipher people were thinking about was LIONESS. There are now some other wide block ciphers, like AEZ by Rogaway, and supposedly DJB has one on the way. So, a few things about the packet format. The header has three parts, and the first part is a public key, an elliptic curve point. Then there's the body, encrypted with a wide block cipher. The way to think about a mix node processing this: there's a key exchange between Alice and the mix node. Alice does her side first; she makes up an ephemeral key for her packet and does the exchange, and then the mix node computes the other side of that Diffie-Hellman. From the result, the mix node extracts the next hop, and it has to mutate all the different fields of the packet. What Sphinx is, really, is the rules for how to mutate those fields.

Okay, so one thing that's kind of important: the header is covered by a MAC, but the body, the delta, is not. Why did we not put a MAC on the delta? This seems very dangerous, because if we were just using an unauthenticated stream cipher, then an adversary who controls a mix node next to the sender, and also the place where the message is going, could just XOR an arbitrary tag into the packet body and then check for it when it arrives. But we don't use a stream cipher, we use a wide block cipher. What this means is that an attacker doing the same sort of thing gets at most a one-bit tagging attack: tampering garbles the whole body, so all the attacker learns is whether a garbled packet showed up. Okay, that's still an attack. Why would we tolerate even a one-bit tagging attack? The answer is that anonymous receivers really matter.
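The stream-cipher-versus-wide-block-cipher point above can be demonstrated concretely. Below is a toy LIONESS-style four-round Feistel built from SHA-256, purely for illustration: it is not the exact LIONESS construction, not Sphinx's body encryption, and the keys are dummies. With a stream cipher, flipping ciphertext bit i flips exactly plaintext bit i, so a tag survives decryption; with the wide block construction, one flipped bit garbles the entire block:

```python
import hashlib

def keystream(key, n):
    """Expand a key into n pseudorandom bytes (SHA-256 in counter mode)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def stream_encrypt(key, plaintext):
    return xor(plaintext, keystream(key, len(plaintext)))

# Toy LIONESS-style wide block cipher over (L, R): 32-byte L, rest is R.
def lioness_encrypt(k1, k2, k3, k4, plaintext):
    L, R = plaintext[:32], plaintext[32:]
    R = xor(R, keystream(xor(L, k1), len(R)))
    L = xor(L, hashlib.sha256(k2 + R).digest())
    R = xor(R, keystream(xor(L, k3), len(R)))
    L = xor(L, hashlib.sha256(k4 + R).digest())
    return L + R

def lioness_decrypt(k1, k2, k3, k4, ciphertext):
    L, R = ciphertext[:32], ciphertext[32:]
    L = xor(L, hashlib.sha256(k4 + R).digest())
    R = xor(R, keystream(xor(L, k3), len(R)))
    L = xor(L, hashlib.sha256(k2 + R).digest())
    R = xor(R, keystream(xor(L, k1), len(R)))
    return L + R

msg = b"A" * 96
key = b"k" * 32
ks = (b"1" * 32, b"2" * 32, b"3" * 32, b"4" * 32)

# Stream cipher: the XOR tag survives, a many-bit tagging channel.
c = stream_encrypt(key, msg)
tampered = xor(c, b"\x01" + b"\x00" * (len(c) - 1))
assert stream_encrypt(key, tampered)[0] == msg[0] ^ 0x01

# Wide block cipher: one flipped bit destroys the whole plaintext, so the
# attacker learns at most one bit ("did it survive or not").
w = lioness_encrypt(*ks, msg)
tampered_w = xor(w, b"\x01" + b"\x00" * (len(w) - 1))
garbled = lioness_decrypt(*ks, tampered_w)
```

The Feistel structure is what makes the cipher "wide": every output byte depends on every input byte, so there is no per-bit malleability for the attacker to exploit.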
There are a few examples: of course a journalistic source, some sort of whistleblower, but also any kind of service. If you want to talk to some cryptocurrency network, or download some file, anything where you interact with a service, you need some kind of acknowledgement back. In fact, even the basic protocol ACKs of a messaging system need some sort of reply. So how do we do anonymous receivers? We create what's called a single-use reply block, a SURB. A SURB consists of the first node the reply should be sent to, an expiration date, a precomputed header, and one cryptographic key for one layer of payload encryption. The recipient makes up this SURB and supplies it to the sender at some point in the past; the sender attaches their delta and can then send to the recipient.

Okay, so great. Now let's get into something tricky. If you looked at the key exchange I described, the sender, Alice, just made up her alpha on the spot, so her key is ephemeral, but the mix node key wasn't: it was supplied by the PKI. We want our protocols to be forward secure. Tor is forward secure; it does a live key negotiation with each hop, which is great. So we need some kind of forward security, and we don't have it a priori. Well, first of all, a mix net needs some kind of replay attack protection anyway, and that requires some sort of data structure that will eventually fill up or overflow. To prevent that, we have to do key rotation anyway. So one option is to just rotate the mix node keys faster.
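The SURB fields just listed can be written down as a small data sketch. This is a deliberately hollow illustration with names of my own choosing, with stand-in bytes where real Sphinx key material and onion headers would go; it only shows who holds what and in which direction the pieces travel:

```python
from collections import namedtuple
import os

# The fields the talk lists: the first hop the reply is handed to, an
# expiry, the precomputed (already onion-encrypted) header, and a key the
# recipient keeps so it can decrypt the payload when the reply arrives.
SURB = namedtuple("SURB", ["first_hop", "expires_at", "header", "payload_key"])

def create_surb(first_hop, expires_at):
    """Recipient side: precompute a reply header and remember the key."""
    payload_key = os.urandom(32)           # stand-in for real key material
    header = b"precomputed-sphinx-header"  # stand-in for the onion header
    surb = SURB(first_hop, expires_at, header, payload_key)
    return surb, payload_key               # recipient stores payload_key

def reply_with_surb(surb, payload):
    """Sender side: attach the payload (the delta) to the opaque header and
    hand the packet to the SURB's first hop, never learning where it
    ultimately goes."""
    packet = surb.header + payload         # toy concatenation, not real Sphinx
    return surb.first_hop, packet
```

The asymmetry is the point: the sender can route a reply without ever holding the recipient's address, and the SURB is single-use, so it cannot be replayed to probe the route.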
The problem with that is that you don't want to stress the PKI too much, because the PKI is already a scaling pain. Another problem is that SURB lifetimes can't exceed the node key lifetimes. So if we want our key compromise window to be smaller than the node key lifetimes, or smaller than the SURB lifetimes, we have to do something else. There are a couple of ideas. The idea is: maybe we can be a little like Tor and use more packets to send the packet we want, but not do it the way Tor does it. George proposed, back around 2000, using two packets in different key epochs. That's pretty good; it gives you a lot of nice properties. There's another thing you can do, which I've been working on: you can use a loop to a mix node to actually do a key exchange, and then with that mix node you can use a double ratchet construction for some hops. Both of these are cheating a bit, and you wouldn't want to do them at all hops, because they create some correlations between packets.

So in general we can ask: how do we make this mix node key exchange forward secure? I don't want to say too much about this, but in general we can talk about the different basic technologies for key exchanges and the properties we can get out of them in the context of Sphinx. Anything that's based on elliptic curves is not going to be post-quantum, so if we want that, we need to do something else. There was a blinding operation in Sphinx that I didn't tell you about; doing it in a post-quantum context is tricky. Probably it works for SIDH.
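The replay-protection argument above, that the seen-packet structure fills up and key rotation is what bounds it, can be sketched directly. This is my own minimal illustration, not Katzenpost's actual cache; the class and method names are invented:

```python
class ReplayCache:
    """Per-epoch replay protection: each mix key epoch gets its own set of
    seen packet tags. Rotating the key lets the mix discard the old set,
    bounding both memory use and the key-compromise window."""

    def __init__(self):
        self.current_epoch = 0
        self.seen = {0: set()}

    def rotate(self):
        """New key epoch: old tags (and the old private key) can be erased."""
        del self.seen[self.current_epoch]
        self.current_epoch += 1
        self.seen[self.current_epoch] = set()

    def accept(self, tag):
        """True the first time a tag is seen this epoch, False on replay."""
        tags = self.seen[self.current_epoch]
        if tag in tags:
            return False
        tags.add(tag)
        return True

cache = ReplayCache()
cache.accept(b"tag-1")             # first delivery, accepted
replayed = cache.accept(b"tag-1")  # replay within the epoch, rejected
cache.rotate()                     # key rotation empties the cache
```

Forgetting old tags after rotation is safe because packets built for the old key can no longer be processed at all; that is the same rotation that shrinks the forward-security window, which is why the two concerns are tied together.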
We don't know if it works for LWE, and we certainly have no idea how to do it efficiently; maybe it can be done. Our cheating strategy gives us nice key erasure properties, and it gives us post-quantum security if the loop did a post-quantum key exchange. And there's another nice property it gives that you can't really get any other way: because the blinding thing is hybrid, you can actually have a hybrid post-quantum property. That means you can use both an elliptic curve and a post-quantum key exchange, and if either one of them is good, then the adversary can't break it. If you try to do this construction with something like LWE, you're probably not going to get that hybrid post-quantum property, because the blinding operation itself will depend on the LWE cryptographic assumptions. Nevertheless, I want to conjecture that LWE, which means learning with errors, may be the eventual post-quantum key exchange we want to use. Mathematicians love conjectures, so while I don't think there's a blinding scheme for LWE yet, I think we can probably eventually come up with some kind of nice blinding for an LWE scheme, and maybe it even gets puncturing. Punctured encryption is something you can currently do with pairing-based crypto, and it's excruciatingly slow, but I suspect it could be done much faster with LWE.

Okay, so mix networks are unreliable packet-switched networks. That means some classical networking literature can be applied. An automatic repeat request, or ARQ, protocol scheme is one of those schemes with protocol acknowledgements and retransmissions. We can do this over mix networks, but it leaks extra information: every ACK could potentially be used in a correlation attack, for instance if the adversary causes the ACK packet to be dropped. Stop-and-Wait ARQ, the simplest variety of these protocols, leaks the least amount of information.
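Stop-and-Wait ARQ is small enough to sketch whole. This is a toy model under my own naming, with the mix-net round trip abstracted into a single callable; in the real system the ACK would come back via a SURB, and each extra transmission is exactly the extra observable traffic the talk warns about:

```python
def stop_and_wait_send(message, channel_send, max_attempts=5):
    """Minimal Stop-and-Wait ARQ sender: send one packet, wait for its ACK,
    retransmit on timeout, and only then move on. `channel_send` models one
    round trip over the mix net and returns True iff an ACK came back
    before the timeout."""
    for attempt in range(1, max_attempts + 1):
        if channel_send(message):
            return attempt        # delivered; number of transmissions used
    raise TimeoutError("no ACK after %d attempts" % max_attempts)

class FlakyChannel:
    """A lossy channel that drops the first transmission of each message."""

    def __init__(self):
        self.tries = {}

    def send(self, message):
        n = self.tries.get(message, 0) + 1
        self.tries[message] = n
        return n >= 2             # first attempt "lost", second ACKed

channel = FlakyChannel()
attempts = stop_and_wait_send(b"hello", channel.send)   # retransmits once
```

Only one packet is ever in flight, which is why this variant leaks the least: the adversary sees at most one pending send/ACK pair per conversation rather than a window of correlated packets.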
So that's what we're using. We have three cryptographic layers in our stack right now, in this Katzenpost project we're working on. Yawning Angel wrote a cryptographic link layer based on the Noise cryptographic framework; he's mixing NewHope Simple with X25519 in the key exchange. We also have the Sphinx cryptographic layer, the packet format Jeff talked about earlier. And we also have an end-to-end cryptographic messaging layer.

This is another Loopix-style diagram. Alice sends a message to Bob's provider: it goes through the mix network, and Bob can retrieve his message later. With some relatively simple changes to this Loopix design, we can have stronger location-hiding properties, where Alice and Bob don't talk directly to the provider they're retrieving messages from; they can send single-use reply blocks to retrieve messages. This would increase latency.

One comment worth making here: schemes in academia often want to use PIR for this step of retrieving things from your provider. One of the problems with using a PIR scheme here is that you have very different assumptions in play, and the way you model it necessarily becomes quite complex. It's probably fun if you're a graduate student playing with all this stuff, but actually getting everything to match up would be complicated. This is why, in the scheme David's describing, the mix net itself gives you your location-hiding properties, so you can extract some simpler guarantees. Right. Whereas this plain Loopix design doesn't have strong location-hiding properties: in particular, if Alice really wanted to figure out where Bob is, she could hack his provider and then stake it out until his IP address showed up again.
So one problem with these provider models, as David just said, is that your provider can get hacked. And there's a way to fix that. It requires modifying Sphinx a bit; I know we just said don't roll your own packet format, but it's a good idea to go through the security proof again anyway, and it's a small change. The idea is that in the middle, this hard drive picture, there's some sort of mailbox server or accumulation point, which the receiver can move whenever he wants without telling his contacts. His contacts actually reach him in other ways: either he gives them SURBs, or he puts SURBs at this thing called a crossover point, which I don't want to say too much about. The idea is that our receiver can send some SURBs to this point in the middle, and when he goes online it will send him his messages. So you get this decoupling. And one of the nice things: at the end of the day, your security result for a mix net is going to be something like, okay, they're not going to be able to de-anonymize you within three months. We may be able to do a bit better than that if we can move this mailbox in the middle periodically. But this is very much work in progress; it's not at all in Katzenpost yet, and it requires modifying Sphinx and redoing a number of proofs.

Okay, so we've been talking about applications with messaging in mind. There are other applications where you're still sending messages. To give you something a bit more concrete, there are a few schemes for doing anonymous money. Well, right now there are a lot of schemes for doing anonymous money, and mostly they suck, but there are a few that are actually quite good and have extremely strong cryptographic assurances on their anonymity.
With Zcash, you'd basically have to invert a hash function or something to break it, I'm not completely sure. With Taler, the RSA blind signatures have information-theoretically secure blinding, which means the blinding is absolutely unbreakable; there are points in Taler where it's weaker than that, though. Another thing you might ask is: can we do anything web-like? Well, there is a project that wants to package up web pages and ship them over Freenet, so you could use the same approach to ship things over a mix network. But fundamentally, if you imagine building some collaborative application, running something like Google Wave, or an Etherpad, over a mix network, you're going to hit interesting issues with merges and other things, and the latency is going to have other impacts on the users. One thing we're not really working on, but would really like other people to think about, is how to make people happy with higher-latency applications. This sounds hard, but when you look at people developing more modern web frameworks, they're actually doing more of the right abstracting: something like CouchDB isn't literally supporting high latency, but it's decoupling things in a way that's quite relevant to what we want to do.

But it wouldn't be fair for us to say: hey, use this cool messaging app, it's unreliable, so I'll send you a message but you might not get it. So we definitely want to build in some reliability, and you pay for that in retransmissions sometimes, and in some extra leaked information, which we need to compensate for with more decoy traffic. The Loopix paper actually explores this trade-off: you can make the latency of a mix network lower if you're willing to send more decoy traffic, and that should help.
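The blinding property just attributed to RSA blind signatures is easy to see in textbook form. This is a toy sketch with classic textbook parameters, not Taler's actual protocol or key sizes; never use numbers like these in practice. The signer only ever sees m·r^e mod n, and since r is uniformly random, that value is statistically independent of m, which is the information-theoretic unlinkability claim:

```python
# Textbook RSA blind signature. Toy parameters for illustration only.
p, q = 61, 53
n = p * q                  # 3233
e = 17
d = 2753                   # satisfies e*d = 1 mod phi(n)

def blind(m, r):
    """Customer blinds the message with a random factor r coprime to n."""
    return (m * pow(r, e, n)) % n

def sign(blinded):
    """Signer signs blindly; it never sees m itself."""
    return pow(blinded, d, n)

def unblind(blind_sig, r):
    """Customer strips the blinding: (m * r^e)^d = m^d * r, so divide by r."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(m, sig):
    return pow(sig, e, n) == m % n

m = 1234                   # the "coin" to be signed
r = 71                     # random blinding factor, coprime to n
sig = unblind(sign(blind(m, r)), r)
```

The resulting `sig` is an ordinary RSA signature on `m`, yet the signer cannot link it to the blinded value it signed, which is what makes the spend unlinkable to the withdrawal.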
Yeah, it still doesn't make mix networks as low-latency as Tor, I don't think, or even close, but this is a matter of tuning, and we can at least have lower-latency mix networks than, say, ten years ago. One of the nice things about the work David and Yawning have been doing is that they're really trying to make the reliability measures work just above the mix network layer. This is really essential if you want to build something that application developers can use, because it's actually common in anonymity systems for the reliability measures to compromise other properties. So being able to do the reliability work in a way that preserves your security properties is important.

Okay. We'd like to say thanks to the researchers we've been working with. I'd like to thank Yawning Angel for all the good design advice and the work on the specifications, and George for his advice; George and Claudia are always wonderful. George and Claudia for their excellent paper, Ania for her Loopix paper. Christian, I've talked with him all the time about everything I've been working on. Nick Mathewson from the Tor project helped me out a lot with our PKI specification; well, I mean, he wrote the directory authority system for Mixminion and for Tor. And thanks also to Trevor Perrin for running this wonderful mailing list, where we get a number of important ideas; Trevor also helped with our PKI, and that was really great, as with our wire protocol using Noise. All right, that's it. Thank you so much.

If you have any questions here in the room, please line up at the microphones. Do we have questions from the internet, from the IRC network? No questions from IRC.
There's one question at microphone one. You mentioned latency will be higher than Tor. Should we be thinking sort of seconds, minutes, what's the sort of order? We don't know. Oh yeah, so the question is: the latency will be higher than Tor, how high will it be? We don't really know until we tune the mix network. George just claimed seconds, but I should start off by saying that mix networks aren't trying to be a general-purpose anonymity system like Tor. We're trying to make customized networks for specific applications, and each application has different traffic patterns and different ways it's used, so the latency would necessarily come after tuning. We have some idea that it's maybe a few minutes, let's say, but I really can't answer the question yet. Actually, the researchers we're working with are about to publish a new paper about how to tune decoy traffic and latency for the desired entropy you want in each mix. Yeah. Microphone number two, your question? Yeah, you mentioned that mix network PKIs have higher scalability problems than in Tor. Why is that? It looks like the mix network will have fewer nodes because you don't need route unpredictability. So, I mean, if you're trying to build a replacement for email and you want everyone in the world to use it, if you work through a sort of very bullshit back-of-the-envelope computation, there's an argument that a centralized PKI plus whatever other anonymity system is only about 10 million times better than just sending every message to everybody. That's very back-of-the-envelope; you can try to work it out. So, you need, well, okay, so there's that.
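The "desired entropy in each mix" mentioned above is the standard anonymity metric: the Shannon entropy of the attacker's probability distribution over which input packet became a given output packet. A minimal sketch of that computation, with purely illustrative numbers:

```python
import math

def mix_entropy(probs):
    """Shannon entropy (in bits) of the attacker's distribution over
    which input packet corresponds to a given output packet.
    probs must sum to 1; zero-probability candidates are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A pool of 8 equally likely candidate inputs gives 3 bits of entropy:
assert mix_entropy([1 / 8] * 8) == 3.0

# Skewed distributions give less: if one input is very likely,
# the mix provides little anonymity even with many candidates.
assert mix_entropy([0.9] + [0.1 / 7] * 7) < 3.0
```

Tuning then means choosing decoy rates and per-hop delays so each mix sustains some target entropy under the expected real traffic load.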
And the specific thing, when I said it's less of a problem for Tor, is that Tor can do certain clever things. There's one of their proposals, which I think they're actually not taking that seriously at the moment, where they publish the PKI, the big consensus document, but nodes don't actually download the whole consensus at all. They just point to a place in the consensus and get back a proof that they were forwarded to the correct node. So that gives you another order of magnitude or two on that factor of 10 million I just quoted you. Okay. Microphone number three. Hi. This looks like really good work and I'm happy to see it. My question is: if there are multiple applications which have different tuning requirements, can they share the same network and help each other's anonymity set, or do we have to have multiple networks? So, we agree it would be best if they could help each other by increasing each other's anonymity set, but we're concerned that the specific tuning for the decoy traffic might prohibit this in some cases. And there are some other considerations as well. Since we're not stream-oriented, all the data has to fit in one packet. So if we have an email use case, we probably are going to get around 50k average-sized emails, let's say. And if we want to make a mixnet chat application, I might send really short messages like "yo, what's up?", and now we're sending that in a big 50k packet. So one thing that is clear is you wouldn't have a new network for every application. Obviously, if you have something that's going to be quite infrequent, like a payment thing, then you should be using a network with much more frequent packets and just accept the inefficiency.
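The point about the 50k packet follows from mix networks requiring all packets to be indistinguishable, so every message is padded up to one fixed size. A toy sketch of such padding, where the 4-byte length prefix and the payload size are hypothetical choices of mine, not the project's actual packet format:

```python
PACKET_PAYLOAD = 50 * 1024  # illustrative fixed payload size, in bytes

def pad(message: bytes) -> bytes:
    """Length-prefix the message and zero-pad it to the fixed size,
    so short and long messages are indistinguishable on the wire."""
    if len(message) > PACKET_PAYLOAD - 4:
        raise ValueError("message does not fit in one packet")
    prefix = len(message).to_bytes(4, "big")
    padding = b"\x00" * (PACKET_PAYLOAD - 4 - len(message))
    return prefix + message + padding

def unpad(packet: bytes) -> bytes:
    """Recover the original message from a padded packet."""
    n = int.from_bytes(packet[:4], "big")
    return packet[4:4 + n]

msg = b"yo, what's up?"
packet = pad(msg)
assert len(packet) == PACKET_PAYLOAD  # 14 bytes cost a full 50k packet
assert unpad(packet) == msg
```

This makes the inefficiency the speaker describes visible: a 14-byte chat message and a 50k email produce byte-identical packet sizes, which is exactly the property the mix needs.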
And there's another consideration too, which is that in these chat applications, communication partnerships might be symmetrical, in that we might send each other roughly the same amount of data. Whereas in stuff like the web, not that I think mix nets are good for web browsing, it's more like "get page" and then you get a bunch of information back. So what would the decoy traffic look like for that, versus a symmetrical communication partnership? That's what I meant by some applications might not be compatible with each other when tuning the decoy traffic in the same mix network. We certainly would hope that most sort of peer-to-peer applications, your Etherpad, your other collaborative applications, your email, your payment network, could all be bundled onto one network that was optimized for this email-like use case. And then whether you actually need the instant messaging network at all is another question. All right, microphone number one, what's your question? Can you give more concrete examples of software to try out? Like, papers are great, but is there anything to touch, to hack on, whatever? Well, actually right now we're running a test mix network on several machines that we had lying around, and it works great. Thanks to Mesquio and Kali for their help with that. But we don't really have anything near production-ready. Yeah, we're very far away; the stuff I was talking about doesn't even work yet, but it will. So the answer to the question is no, we've got nothing, but we hope soon. I'm not sure how soon; it depends on funding, depends on other things. We're working on it. Thank you. Microphone two, what is your question? I was thinking about this in the real world. You're envisioning an app where people can communicate.
And I worry about mobile telephones. Let's envision two users using this app to communicate with each other. The idea would be that one person sends a message, and then sometime later the other person takes their phone out of their pocket. There's so much going on when a phone comes out of a pocket and the screen is turned on: WhatsApp is talked to, there's so much that you can look at outside of this whole mix network. If, over a month of time, you can correlate who takes their phone out of their pocket every time one person sends a message, can't you correlate that way? And isn't that a huge problem that sort of sits completely outside of the world of the problems you're thinking about? In my ideal world, part of the solution to making users happier with latency is that the phone doesn't ding anymore. You don't get notifications; you just check your phone when you check your phone. I think that would be an important security property as well, but I would actually like it. There's a question here: would that make people actually happier with latency? All of these things that are being built now are being built to sort of maximize engagement, and you actually don't want to do that anymore. You want people to only use it when they want to use it. All right. Thank you. It seems there are no further questions, so thanks a lot to Jeff, thanks a lot to David.