I was trying to figure out what kind of angle to attack this from. Normally we talk about these holistic things, about what we are doing at Tor. But I looked at the content we had been producing over the past few years, also during Corona, and we haven't been out talking much about what we're doing. So the focus in this presentation is going to be specifically on the Tor network and the tools that we're using to operate and experiment with the network. We won't be talking much about the anti-censorship technology that we're also doing, the Tor Browser, localization efforts, metrics, all these kinds of things. So it's going to be a little bit of a back-end view of the network. My name is Alex. I've been involved with the Tor Project since 2017, and I've been leading the network team since 2019. We're the team responsible for writing the software that is called Tor: the binary you get if you apt install Tor, and also the binary that comes with Tor Browser and gives you the connectivity to the network. I've been doing free software since 2006. I think the most famous project I've been involved with, other than Tor, is the irssi IRC client. OHM was my first hacker camp, back in 2013. It's really, really wonderful to be back. After we went to OHM and some of the CCC camps, we decided to do our own hacker camp in Denmark, called BornHack. So if you have the post-event blues when this show is over, you can pack your car and just drive to Denmark and build it all up again, and we'll do another hacker camp there. There were a lot of hands up here when you were asked if you know Tor, so I'm going to do a very quick primer on what it is, so everybody is in on the lingo and so on. We do online anonymity and censorship-resistance technology. We do free software.
Everything we do is free software, because we believe the only way to do secure software with the promises we want to guarantee is to have the source code available for other people, researchers and so on, to look at. The network itself is open; there are many people running different kinds of relays. How many people in here are running relays or bridges? OK, great, that's awesome. And we are, of course, a large community outside of the nonprofit that we run ourselves: researchers, developers, users, relay operators, all kinds of people who participate in one way or another. Normally when you do software, you get a lot of metrics; we have all these systems today to collect metrics. But because we're doing anonymity, we have a hard time figuring out how many people are actually using it. Current research seems to indicate that it's somewhere between 2 million and 8 million daily users. So, a quick primer on how Tor works. Here we have Alice, and we have Bob. Bob is usually a server that you want to reach on the internet. We have the Tor network in the middle, consisting of a number of relays operated by different kinds of people. What Alice does is that she knows the entire composition of the network: she knows all the relays that exist in this system, and she decides on three nodes that she wants to connect through before reaching Bob. She starts by establishing a cryptographic session key with the first relay. She extends the circuit to the second relay, extends it further to the last relay, and then finally makes the TCP stream out to Bob, and is able to start communicating over application-level protocols such as HTTP or HTTPS. We call this telescope-style connectivity. As you can see, it resembles some of the flag poles that we have here at camp, which fits pretty well.
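The telescoping, layered-encryption idea just described can be sketched with toy crypto. This is a minimal illustration in Python: a simple XOR keystream stands in for Tor's real ciphers and handshakes, and the key names and payload are invented for the example.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256; NOT Tor's actual cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key undoes it.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Alice has negotiated one session key per hop (guard, middle, exit).
hop_keys = [b"guard-key", b"middle-key", b"exit-key"]
payload = b"GET / HTTP/1.1"

# Alice wraps the payload once per hop, innermost layer for the exit.
cell = payload
for key in reversed(hop_keys):
    cell = xor_layer(key, cell)

# Each relay peels exactly one layer as the cell travels outward,
# so only the exit ever sees the plaintext destined for Bob.
for key in hop_keys:
    cell = xor_layer(key, cell)

assert cell == payload
```

The point of the sketch is the structure, not the cryptography: each hop can remove only its own layer, so no single relay sees both who Alice is and what she is sending.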
Usually when we talk about these three nodes, the first one we call a guard node, the second one a middle node, and the third one the exit node. We also have something called onion services. Basically, to think of onion services: you eliminate the exit node, add another node there, and then you mirror the entire diagram on the other side, because onion services are services that exist only inside the Tor network itself. A little while ago, our wonderful UX and UI people sent out a survey in the Tor community about what kinds of problems people were facing with using Tor daily. And this was the order that things came up in, in terms of what was most important to the general user. The speed problem is, of course, related to the performance of the Tor network. A lot of people have a feeling that Tor is slow to use, and they would like it to be faster; that has to do with throughput and the latencies related to that. Blocking has to do with website blocking: we see a lot of CAPTCHAs when you're using Tor Browser, from websites trying to deny access to Tor Browser users. The blocking issue is going to be particularly interesting in the near future, because the big fruit company has announced, for their paying customers, this private relay on their iPhone devices and so on. So they're probably going to face some of the same issues that we have, or the internet will have to evolve to use smarter technology than just blocking everyone who looks like they're doing something bad. We of course have the privacy and anonymity guarantees provided by Tor, which we have to continually tune over time. We have the security of all the different Tor components. And we have the user interface of our products, which, if you're using them daily, you will probably have seen has changed a lot over time.
I personally think it's getting way better. If we look at the Tor network, it's an open network, which means that everybody can join it. Everybody is able to run a relay: if you have an IP and some bandwidth, you can set up a node. We currently have between 6,000 and 7,000 relay nodes. Like I mentioned earlier, they are hosted probably by some people in here, by different nonprofit organizations around the planet, or just by individuals who set up a machine at home because they have some spare capacity. We have nine of what we call directory authorities in the network. These are specifically trusted nodes; they are a little bit like a CA in the X.509 TLS system. And we also have something called the bridge authority, which is for the non-public bridge nodes that exist for people in censored areas who are connecting into the network. If we look at the number of relays over time, we can see that specifically after the Snowden revelations in the summer of 2013, the curve goes up, and then it reaches the plateau where we are today. The bridge numbers are a little bit more laggy, but we recently ran a campaign to get more bridges, and fortunately a lot of people here in 2022 have started running bridges. Even though we have hit a sweet spot somewhere between 6,000 and 7,000 relays, the bandwidth of the network continues to grow and get faster, as the internet in general gets faster. We're going to look more at the bandwidth in recent times later in the presentation, but I'm quickly going to go to the next part. So, some of the things we are about to come out with in the near future: the idea with many of these is that it's stuff we are either already working on, or something we have grant applications out for so that we are actually able to do it.
For people who are familiar with some of the technical aspects of the Tor protocol: we use something called circuits, which is the internal layer of encryption, where you pass around what are called cells. This is similar to other protocols that use packets, or some other synonym for that. We pass around these 498-byte cells. One of the issues we've had is that we have signaling mechanisms inside the protocol where we are interested in transferring data that is significantly smaller than 498 bytes. We have two proposals there that we have merged into what is called proposal 340; proposals are our way of doing RFC-style network updates that get reviewed. One of them is packed cells: it allows us to take multiple smaller cells and compress them into a single cell that will then be expanded on the receiving side, whether the guard, middle, or exit node is the target. We also have something called fragmented cells (I'm going to return to why this one is important in a bit), which will allow us to have very large cells that we split into multiple cells, which are then gathered at the destination and packed back into a full payload. The reason we need this: I don't know if people have been following the NIST competition on post-quantum cryptography that has been running recently. I think it was announced two weeks ago who the winners were. These post-quantum handshakes: we have historically moved from RSA and classic Diffie-Hellman over to elliptic-curve cryptography when we entered the mobile world, and we got significantly smaller handshakes; there was way less data on the wire that we were transferring around. But now we are entering this reality of post-quantum cryptography.
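The packed-cells and fragmented-cells ideas just described can be sketched roughly like this. This is a toy model only: it assumes a simple two-byte length prefix per message and zero-padding, which is not the actual wire format of proposal 340.

```python
import struct

CELL_PAYLOAD = 498  # usable relay-cell payload, in bytes

def pack_cells(messages):
    # "Packed cells": several small, length-prefixed messages share one buffer.
    buf = b"".join(struct.pack(">H", len(m)) + m for m in messages)
    # "Fragmented cells": the buffer is split across as many fixed-size
    # cells as needed; the last cell is zero-padded.
    cells = [buf[i:i + CELL_PAYLOAD] for i in range(0, len(buf), CELL_PAYLOAD)]
    cells[-1] = cells[-1].ljust(CELL_PAYLOAD, b"\x00")
    return cells

def unpack_cells(cells):
    # The receiver reassembles the stream and walks the length prefixes.
    buf = b"".join(cells)
    messages, off = [], 0
    while off + 2 <= len(buf):
        (length,) = struct.unpack(">H", buf[off:off + 2])
        if length == 0:  # hit the zero padding at the end
            break
        messages.append(buf[off + 2:off + 2 + length])
        off += 2 + length
    return messages

small = [b"padding-negotiate", b"drop"]          # two tiny messages, one cell
big = [b"x" * 2000]                              # e.g. a large PQ handshake blob
assert unpack_cells(pack_cells(small)) == small
assert len(pack_cells(small)) == 1
assert unpack_cells(pack_cells(big)) == big
assert len(pack_cells(big)) == 5                 # 2002 bytes -> 5 cells of 498
```

The same framing handles both directions: many small messages per cell, or one large message across many cells.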
The idea there is that these new handshakes have significantly larger secret keys and public keys, and the handshake material that you have to transfer around is also large in bytes, larger even than what we used for RSA. This is why we need the fragmented cells: to carry this amount of additional data. For people who are not aware of the post-quantum stuff, the worry is that eventually someone will build a quantum computer and be able to decrypt the handshakes that we've been transferring over the internet. Add to that the notion that there might be an adversary right now recording all internet traffic, even the encrypted traffic; when these machines become available, if they become available, they will go back and replay all of this and decrypt every transmission that has happened in the past. So we have two parts of Tor that we need to look into and secure against this kind of adversarial model. The TLS layer is the outer layer of the Tor protocol. We use ordinary TLS; we don't use Let's Encrypt or anything like that, because we don't depend on external services. The network is self-hosting, so to say. The TLS layer protects against the adversary who's recording traffic going around in the network. And then we have the Tor circuit layer, which is a protection mechanism against an adversary who's currently in the network, for example as a middle node, and is monitoring the encrypted traffic it sees once it is unpacked from the TLS stack. One of the other things we recently deployed (the post-quantum stuff is not deployed yet; that's still in the research phase) is congestion control, which was added in Tor 0.4.7, released in its stable version at the end of April. There were some alpha releases before that.
What we did there was, I personally find it to be a very interesting project. It was led by Tor developer Mike Perry, and we used pretty much all of our different teams and all the researchers we are in close contact with to do this. What happened was that we implemented three classic TCP congestion-control algorithms: the one called Westwood, the one called Vegas, and the one called NOLA. You can Google these things and read the Wikipedia articles about how they work. What the team did then was take a very scientific approach, running a ton of simulations to see whether the algorithms were actually working or not. We found issues with two of them, despite having spent time implementing them: Westwood and NOLA have this ACK compression problem that happens with some congestion algorithms in the way we were using them. They overestimated the bandwidth-delay product, which led to bizarre conditions that didn't really work. Google also came out with the BBR algorithm, which is another textbook congestion-control algorithm, but it would suffer from the same kind of mechanisms we saw with Westwood and NOLA, so there was no reason to dive further into that. However, the Vegas one worked extremely beautifully, and worked in simulation exactly how we thought it would from the paper. These plots are a little bit hard to read; they show a distribution of the data that we're sending. The blue you see is the old version of Tor, and the orangey-yellow thing is the modern version, after congestion control. The first one is simulated as being a client in Germany, because in Germany there's Hetzner, and we generally have a lot of Tor nodes in Central Europe and North America. The second image is from Hong Kong.
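The core of the Vegas approach mentioned above fits in a few lines: it estimates how many packets are sitting in queues from how much the current round-trip time is inflated over the best round-trip time ever seen. This is a simplified model with made-up alpha/beta thresholds, not the parameters Tor actually ships.

```python
def vegas_update(cwnd, base_rtt, current_rtt, alpha=3, beta=6):
    # Vegas estimates queued packets from RTT inflation: the expected rate
    # uses the minimum RTT ever observed, the actual rate uses the current RTT.
    expected = cwnd / base_rtt
    actual = cwnd / current_rtt
    queued = (expected - actual) * base_rtt
    if queued < alpha:    # path underused: grow the window
        return cwnd + 1
    if queued > beta:     # queue building up: back off before loss occurs
        return cwnd - 1
    return cwnd           # in the sweet spot: hold steady

# No queueing delay at all -> grow.
assert vegas_update(100, base_rtt=0.1, current_rtt=0.1) == 101
# Heavy RTT inflation (lots queued) -> shrink.
assert vegas_update(100, base_rtt=0.1, current_rtt=0.2) == 99
# Mild inflation within [alpha, beta] -> hold.
assert vegas_update(100, base_rtt=0.1, current_rtt=0.105) == 100
```

The appeal for Tor is that Vegas reacts to delay rather than to packet loss, so it backs off before queues in relays grow long, which is exactly the latency problem described earlier.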
And what we see here is that, in general, people are able to achieve a much higher amount of bandwidth after this change. It's worth adding that historically, when the team I'm on does experiments, we have two tools we can pull in for local Tor network simulations. We have one called Chutney, which is a tool where you specify, in Python files, that you want a Tor network consisting of, say, 200 nodes, some bridges, four directory authorities. It then spawns that up on your machine and you can do some testing to see how things are working. We also have the big artillery, which is called Shadow. It was originally made by Rob Jansen, one of the researchers heavily involved with Tor, and he now has a team sitting and working on making Shadow work a lot better. We have finally started adopting it in our workflows. I don't know if there's anything out about this yet on our blog, blog.torproject.org, but it's really interesting to look at how this was run; it's almost worth its own talk. We used GitLab runners to spawn large numbers of simulation nodes, so that every time the team wanted to run new experiments, it was just a push, and they got the results and could look at them a little while after. So this is the total relay bandwidth plot that you saw before, where it was going steeply up and was difficult to read; this one covers only 2021 and 2022. Let's see if this thing works. You see these two huge spikes here. To understand this plot: for the top part, relays are consistently observing how much traffic they're seeing, and they note down the maximum value of traffic they see in the network right now. They report it back to the directory authorities when they call in and say, hey, I'm still alive. The orange part is where they report how much traffic they have actually seen.
So these two spikes that happened in, oh, wow, it's very dead. These two spikes come from an experiment where we essentially wrote a script that would iterate over the entire set of Tor relays and hammer traffic over each of them in a very short amount of time, so that the relay would discover its actual capacity and be able to report back: oh, I'm actually able to transfer much more traffic. The rolling window then runs out eventually, and we fall back to the normal kind of traffic that's happening. And as you can see, the amount of reported traffic is not increasing, so this had no impact on the actual bandwidth usage of the network. What is interesting, however, is here. This is the moment where we deployed congestion control, and you can see that we start increasing the amount of traffic that is in the network. A little bit after that, the Tor Browser release came out with the stable 0.4.7, and we can see people start to utilize more traffic. However, as you can see, the curve cuts off a little bit short after. Since we are also a little bit of a social experiment, with all these people running the relays, we unfortunately also have some people attacking the network, using traditional mechanisms of denial of service: either flooding, or finding different kinds of things in the network that are costly. So in the last couple of weeks we've seen an ongoing denial of service using different kinds of techniques, and it has made it pretty hard for us to see whether the deployment of this congestion control feature is going well. But usually after a while these things stop, and we also have some plans for mitigating some of this that I will return to a little bit later in the talk. This is a very beautiful plot: it shows the versions of Tor that are currently active in the network.
Two versions are a little bit special: 0.4.5 is an LTS version, which I believe is the one packaged in Debian stable today, and 0.3.5 was the older LTS version, which is now completely gone. It's really, really awesome. Like I said, we released 0.4.7 stable at the end of April, and we can see that very quickly thereafter, relay operators started upgrading from 0.4.6 to 0.4.7. So a massive thank you to the people who have been doing this. If you're running relays and you haven't upgraded to 0.4.7 yet: please, when it's not as warm as it is now, go home and upgrade. One of the problems for us with having this very heterogeneous network is that we need people to upgrade quickly to get the new features out, so we can actually see that they're working. If nobody upgrades, then we finish our part of it, deploy it to the network, and there will be all kinds of problems. So we've spent significant portions of time doing outreach and trying to get people to actually upgrade. Thank you to everybody who has been doing that and who continues to maintain their Tor relays in a really nice manner. We have some more stuff to do here on congestion control. For people who are a little bit technical about the Tor network: we have something called flags that the directory authorities can give to relays. We have one called Fast and one called Guard. Fast means that you are part of the faster portion of relays in the total network, and Guard means that you seem stable enough that we can use you as the first node. We have these cutoff values, which say that you can only get such a flag if you're able to transmit a certain amount of traffic.
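The flag logic just described can be illustrated roughly as follows. The cutoff numbers and the uptime criterion here are invented for the example; the real thresholds are set by the directory authorities and, as discussed next, are candidates for being raised.

```python
def assign_flags(observed_bw, uptime_days,
                 fast_cutoff=100_000,      # bytes/s, hypothetical value
                 guard_cutoff=2_000_000,   # bytes/s, hypothetical value
                 min_uptime=8):            # days, hypothetical value
    # Sketch of directory-authority flag assignment: Fast needs enough
    # bandwidth; Guard additionally needs stability over time.
    flags = set()
    if observed_bw >= fast_cutoff:
        flags.add("Fast")
    if observed_bw >= guard_cutoff and uptime_days >= min_uptime:
        flags.add("Guard")
    return flags

assert assign_flags(50_000, 30) == set()                 # slow home relay
assert assign_flags(500_000, 2) == {"Fast"}              # fast but young
assert assign_flags(5_000_000, 30) == {"Fast", "Guard"}  # fast and stable
```

Raising the cutoffs, as mentioned next, simply means fewer slow relays end up in the first position of circuits, where they would bottleneck every client using them.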
And we are likely going to bump that value up, because we still have a lot of, well, it's hard for us to know, but I would assume it's something like people running Tor relays on their home connection on a Raspberry Pi 1, and those are probably suffering a bit. Such nodes should likely be neither Fast nor Guard, because if one of them is your guard node, it will significantly impact your experience if you want to go fast over the Tor network. So, to wrap things up a bit: if you're an onion service operator, you will also benefit from the congestion control changes. Relay operators running big relays should probably prepare by setting limits on how much bandwidth they want to use, because as the network slowly upgrades, we will hopefully see the hoses get filled more. If you want to read more about the congestion control stuff, you should read Mike Perry's summary blog post, which is at this link; I will also upload the slides somewhere where you can just click on it if you can't remember it. It's one of our recent blog posts. Another thing we're going to look at is something called Conflux. The idea here is to add one more layer to Tor, which should ideally help us with some of the network performance issues and bottlenecks, using a technique called traffic splitting. We have a proposal for it, and we have the paper originally written by the researchers who designed this; it's worth a read if you're interested in that kind of stuff. The way it works: normally we have this Alice and Bob image again, where Alice has established a session all the way out to Bob. One of the issues here is that we know the internet is a pretty chaotic place. We're traveling through a lot of ASes in this picture; it's not like these nodes are sitting on the same network or anything like that.
So if one of the nodes, or the network between them, goes down, the entire circuit is torn down, and you lose your SSH connection, or your download stops, these kinds of things. What Conflux does is allow Alice to establish multiple paths to the exit node and transmit on the one that is the least congested. We use the congestion-control algorithm to get feedback on how things are going, and we will use the fastest path. If a connection is torn down, we can just start transmitting on the other one, and start establishing a new Conflux path through the network to the exit node. The exit node will then be responsible for buffering a little bit and delivering the data, of course, in the right order. Ooh, the next slide is a hot potato. One of the issues we've seen with the denial-of-service attacks is that they often happen against onion services. We have this architecture today where you have the Tor client running, and I say client because onion services are essentially clients to the network, and then you have nginx or a web server, an IRC server, an SSH server, or whatever, running behind it. There is no good pushback mechanism, as we know them from distributed systems, to avoid the Tor binary getting flooded. On top of that, the current architecture of Tor is single-threaded and not very good at handling this many connections. We do see attacks on onion services that are problematic. So one of the designs we can implement here, and it's disabled by default, is that if an onion service detects that it's the target of something that is pathological in nature, it can start setting a difficulty at its introduction points into the network, where the client has to deliver some kind of proof that they've done a computation. So you cannot do the trivial flooding here.
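The proof-of-work idea can be sketched with a simple hash puzzle. The actual scheme being experimented with uses a dedicated client-puzzle function rather than plain hashing, so treat this as a minimal illustration of the asymmetry only: solving costs many attempts, verifying costs one.

```python
import hashlib

def solve(challenge: bytes, difficulty: int) -> int:
    # Brute-force a nonce whose hash has `difficulty` leading zero bits.
    # The client pays this cost once per connection attempt.
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    # The onion service side: a single hash to check the client's work.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

challenge = b"intro-point-challenge"
nonce = solve(challenge, 12)         # ~4096 expected attempts: cheap for one
assert verify(challenge, nonce, 12)  # client, expensive for a flooder at scale
```

Under attack, the service raises the difficulty, so a flooder must burn CPU per introduction attempt while a legitimate client barely notices.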
We've gotten some help from a hacker called Tevador on this, and we are still experimenting a little bit with the entire setup. It all happened during a hack week, during the ongoing denial of service a couple of weeks ago, where David and Mike dived into this. So, I mentioned some of the shortcomings of the Tor architecture itself and its performance characteristics. One of the really cool things we're doing right now is a project called Arti. Arti stands for "A Rust Tor Implementation". What we're doing is essentially rewriting Tor in Rust, in a smarter way. The goal is to build it as a library for working with Tor in general. Tor as it is today has this very classic UNIX architecture where you have a single binary that does multiple things: it's a client, it can act as a relay, it can act as a directory authority, of which there are only about ten running in production, and it's also an onion service client, and all these different kinds of things. So instead of doing it with this one binary, architected as a binary with a ton of global state and so on, we're building it as a Rust library. This will hopefully make it easier for, for example, mobile developers to integrate Tor into their applications: people doing messaging, for instance, and the COVID-19 tracing project in Germany has been experimenting a bit with integrating Arti on iOS. Historically, we've also had a library called Stem, a Python library which can parse the directory documents that we pass around and the different objects that exist within the Tor network. The goal is for Arti to also do this, so our metrics team will eventually be able to parse network descriptors in Rust and use that instead. Onion services are of course also a big part of it.
With onion services today, Tor is a binary, and you just get TCP connections coming out of the Tor process into your service, or UNIX domain sockets. Having onion services with no sockets involved at all, where it's just API calls and you can, for example, register a callback, is going to be way, way nicer for developers, because we will also be able to deliver more metadata about what's going on in the network. We have tried some of those things with Tor, and it was not very beautiful: we've done some pretty icky hacks for the large providers of onion services, like Facebook and Cloudflare and so on, who needed this. So why on earth would we start rewriting a project that has existed for so long and is such a big part of the code base that we're running? Writing "safe" C, and I have safe C in quotation marks, is pretty costly. We spend a lot of time on static code analysis and having external parties review stuff; even doing internal code review in the team is costly. We have to be very careful when we architect new plans, so that we can split them up into smaller chunks and won't have half of the team spending two weeks sitting and reviewing code. On top of that, the network team at Tor was all very excited about Rust, and all of them expressed interest in spending more time on it, so it grew a little bit naturally. We did an experiment before Arti with replacing parts of the C Tor code base with Rust, but because of the Tor architecture, all the different layers where we had to call from C into Rust and from Rust back out to C became pretty nasty, and we therefore decided to instead do it the hard way and start from scratch.
Also, people in here are probably familiar with CVEs; we have our own tracking called TROVE, which is where we track bugs with security implications specifically. Thanks to Nick Mathewson, who spent some time categorizing the different bugs we've used TROVE for, we could look at them, and it turned out that 21 out of 34 of these were related to memory issues of the kind the C programming language allows you to make. I'm not a very bragging person, but I think the team is a very good team of C programmers, and we still make these mistakes, and I think that goes for every good team writing large code bases in C. The Arti roadmap, as it stands right now, is that we're working towards API stability, so that we have some kind of base where we can start telling people: try this out and start experimenting with stuff. We need to focus at some point on usability, performance, and stability, so that things work and are able to reach the network every time, in the way you expect. For the 1.1 release we are going to start integrating pluggable transports and bridge support, being able to run these things; there's likely some stuff with pluggable transports that we want to look at around the general architecture of that. 1.2 is going to be onion services, and 2.0 is going to be the release where we should be able to replace Tor as a client. Not as a relay node, but as a client. After that, we will begin working on relays, bridges, directory authorities, and all the different services that we use in the network. Relays are going to be really interesting, because with the architecture of Arti we will naturally get a multithreaded design, and hopefully we will see some performance gains on nodes because of that, so people will avoid having to run multiple Tor daemons on their machines. This of course has some implications for what we can now call legacy Tor, the currently existing Tor.
Right now, three out of seven of our team members are working full-time on the Rust and Arti related deliverables. Our goal is to get the entire team over, but we of course still have deliverables we need to finish, and we also have to continue to support C Tor, because it is the one we ship in Tor Browser and the one that's currently running the network. So we will not be able to just completely abandon ship and go do this other thing. We're going to continue maintaining both in parallel for a little while, but you will likely see a reduction in features going into C Tor, with the exception of stuff that touches the network, where we need it for some of the other things, like post-quantum cryptography. One of the other exciting things we are planning to do is UDP support. We want to support these more modern technologies such as VoIP and WebRTC. We haven't dared touch that so far, because the latency issue has been too hard; we needed to solve performance before we could go to this. The cool thing is that, with congestion control, we are able to do this by only having to upgrade clients and the exit nodes; the middle nodes and the other parts don't have to upgrade. One thing I forgot to say earlier, when I said it was very nice that all the operators upgraded, is that we actually saw a much higher share of exit node operators upgrading quickly, and that means the congestion control part was available earlier for the clients that had it. We will use the congestion control system to decide which packets to drop. As you know, with UDP, the way most protocols are engineered is that you both have to take into account the MTU of the networks you're traveling through, and you also need to be aware that routers on your path might decide to drop individual packets if they are congested.
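Using the congestion-control state to decide which UDP datagrams to drop can be modeled in a few lines. This is a toy sketch with invented names: a fixed window of in-flight cells, where datagrams that don't fit are dropped rather than buffered without bound, since UDP applications must tolerate loss anyway.

```python
from collections import deque

class CircuitSender:
    # Toy per-circuit sender: the congestion window caps in-flight cells,
    # and UDP datagrams arriving while the window is full are dropped,
    # much like a congested router on the path would drop them.
    def __init__(self, cwnd):
        self.cwnd = cwnd
        self.inflight = deque()
        self.dropped = 0

    def send_udp(self, datagram):
        if len(self.inflight) >= self.cwnd:
            self.dropped += 1    # congested: drop instead of queueing forever
            return False
        self.inflight.append(datagram)
        return True

    def ack(self):
        # Receiver acknowledged one cell, freeing room in the window.
        if self.inflight:
            self.inflight.popleft()

c = CircuitSender(cwnd=3)
results = [c.send_udp(f"pkt{i}".encode()) for i in range(5)]
assert results == [True, True, True, False, False]
assert c.dropped == 2
c.ack()
assert c.send_udp(b"pkt5") is True   # room again after an ack
```

The design point is that dropping at the congestion signal keeps latency bounded, which matters for exactly the VoIP and WebRTC use cases mentioned above.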
As for how we're going to do it: historically we have talked about doing the inter-relay communication over UDP; that idea is completely dropped. With the congestion control stuff, we're going to continue with TCP over TLS between relays. The way it works is that you establish a normal TCP connection through your guard and so on to the exit node, and then, instead of creating a TCP stream, you have the option of making a UDP connection out from the exit node. There aren't any good options in SOCKS and these kinds of interfaces for UDP; even though it is specified by the IETF and so on that there is an option for it, most UDP applications don't support it. Because of that, we're going to start looking into having a VPN mode for Tor. As you may know, the Guardian Project, who do Orbot on Android, already have this, but it only supports TCP and is limited by the functionality that exists in Tor. We're going to build, from scratch with Arti, an application which is essentially a VPN. The goal is to release it at some point in 2023; I would guess it's going to be in the late part of 2023, but I don't remember the deliverable dates exactly. To do this, we're building a small component called Onionmasq. The way it essentially works is that you should see it as a NAT router. The way most NAT routers work for UDP is that they need to see a UDP packet stream as being a little bit stateful, despite the fact that we teach people that UDP is stateless. So what this library will be doing is reading and writing IP packets from the tun devices that exist on the platform. It will handle multiplexing incoming TCP and UDP flows onto Arti's circuit interface, and make sure that things get transported.
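The NAT-router analogy just described can be sketched as a flow table keyed by the five-tuple. Everything here (function names, the stream-id scheme) is invented for illustration; the point is only that "stateless" UDP gets a stable per-flow mapping onto a circuit stream for as long as the flow lives.

```python
# Toy flow table in the spirit of a NAT router: each five-tuple seen on
# the tun device is pinned to one outgoing Tor stream.
flows = {}
next_stream_id = 1

def lookup_stream(proto, src, sport, dst, dport):
    global next_stream_id
    key = (proto, src, sport, dst, dport)
    if key not in flows:
        flows[key] = next_stream_id   # first packet of a flow: open a stream
        next_stream_id += 1
    return flows[key]

# Two UDP packets of the same flow reuse the same stream...
a = lookup_stream("udp", "10.0.0.2", 40000, "1.2.3.4", 53)
b = lookup_stream("udp", "10.0.0.2", 40000, "1.2.3.4", 53)
# ...while a different five-tuple gets a stream of its own.
c = lookup_stream("tcp", "10.0.0.2", 40001, "1.2.3.4", 443)
assert a == b
assert a != c
```

A real implementation would also expire idle flows and, as described next, carry extra metadata per flow for isolation decisions; this sketch keeps only the mapping itself.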
We will do onion services a bit like the DNSPort works if you're a Qubes user or something like that. The way you use Tor there is that you have a fake DNS server which gives you a token when you try to resolve a .onion address. That token can be an address from a specific IPv4 network or a larger IPv6 network, and when a connection goes to one of those magically mapped IP addresses, a connection to the onion service is established instead. We also need to do some basic filtering, like "I don't want applications reaching this endpoint" or "I don't want applications from this source". Usually, when you use netstat and tools like that, you're used to looking at the five-tuple: the source address, source port, target address, target port, and of course the protocol, whether it's TCP or UDP. We need to carry a little more metadata to be able to do isolation primitives properly. As you know, in Tor Browser we isolate tabs by the origin of the website, so that they cannot see that they're coming from the same user. What we have identified that we are hopefully able to do on modern Android is to get the application UID, the hostname that is the target, and also the DNS cookie that is available there. So we're getting towards the end. If you want to help the Tor Project in any way, you can run Tor relays or bridges. You can teach others about Tor and the things that excite you about privacy. Finding and fixing bugs for us is really nice: there are some features, or minor annoyances, that we don't have time to work on because we also have all the research deliverables and grants that we need to do. So we love getting help from volunteers, and with Arti it seems like it's getting easier for volunteers to contribute to these parts as we leave the C code a little bit behind. You can of course also donate to the project.
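Going back to the onion service part for a moment: the fake-DNS mapping described above can be sketched as a table that hands out synthetic IPv6 addresses for .onion names and resolves them back on connect. The prefix and limits here are invented for illustration; the idea is similar in spirit to C Tor's AutomapHostsOnResolve behavior.

```python
# Sketch of the "fake DNS" mapping for onion services. Each .onion name
# resolved is assigned a synthetic address from a reserved range, and a
# connection to that address is redirected to the onion service.
# The prefix and table size are assumptions, not Tor's actual values.
import ipaddress

class OnionMapper:
    def __init__(self, prefix="fd00:1234::", max_entries=2**16):
        self.base = int(ipaddress.IPv6Address(prefix + "0"))
        self.max_entries = max_entries
        self.by_name = {}
        self.by_addr = {}

    def resolve(self, hostname: str) -> str:
        if not hostname.endswith(".onion"):
            raise ValueError("only .onion names are mapped")
        if hostname in self.by_name:
            return self.by_name[hostname]   # stable per-name token
        if len(self.by_name) >= self.max_entries:
            raise RuntimeError("mapping table full")
        addr = str(ipaddress.IPv6Address(self.base + len(self.by_name) + 1))
        self.by_name[hostname] = addr
        self.by_addr[addr] = hostname
        return addr

    def target_for(self, addr: str):
        # Onion service for a mapped address, else None (meaning:
        # treat the connection as an ordinary exit connection).
        return self.by_addr.get(addr)

m = OnionMapper()
a = m.resolve("abcdefghijklmnopqrstuvwxyz234567abcdefghijklmnopqrstuvw.onion")
assert m.resolve("abcdefghijklmnopqrstuvwxyz234567abcdefghijklmnopqrstuvw.onion") == a
assert m.target_for(a).endswith(".onion")
assert m.target_for("fd00:1234::ffff") is None
```

On connect, the router consults `target_for`; a hit means "open a circuit to this onion service", a miss means normal exit traffic.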
At the end of the year we usually have these campaigns where you can donate and get a T-shirt and so on. For the relay operators in the room: we have a meetup tonight in the c-base tent at 21:00. We'll meet, talk a bit about what's going on, people can ask questions, and we can have a chat about what's happening in the relay operator community. That was all from me; I have some time for questions, I believe.

Now you can. Thank you very much for the talk. It was very interesting, also for me personally, and I think for most of the people in the room too, since barely anyone left. Now I see we are queuing up, so let's start with the mic at the front, please.

Hi, I wonder: are you using, or are you planning to use, QUIC for transporting data in the future? You're thinking about the inter-relay communication? Yes. When we started the congestion control project, QUIC was considered, because QUIC comes with its own mechanisms for both the cryptography, things like roaming, and the congestion control. It was evaluated, but it was significantly easier for us to just upgrade the congestion control at the endpoints over TCP, because most of the network is connected over good lines on the internet, in data centers and so on. So QUIC was evaluated, but we decided not to go with it. Before QUIC was a thing, there was a research group in Cambridge that did a Tor version which does DTLS over UDP for the communication between relays. We looked at it, but it seemed more natural to upgrade TCP, because we are very familiar with TCP in general, right? Thanks. No problem.

Okay, thank you for your question. Then we go to the back microphone. Hi, thank you for a very nice talk. Some context: around four years ago there was a certain Middle Eastern country that was trying to block Telegram. Huh? I can almost not hear you. Can you hear me now? Yeah, speak up a bit; yes, that's fine.
Okay, so four years ago there was a certain Middle Eastern country that was in the middle of an upheaval, and they tried to block Telegram access to prevent people from organizing. As a result, Psiphon had a huge uptick in bandwidth, and in response, the providers Psiphon was using, AWS IP blocks et cetera, started getting blocked. The graph that you showed with an uptick in bandwidth kind of correlates with a certain big event which started recently. This one? Yes. I'm not really seeing the months there, but we are in 2022... no, no, the one at the end, right? Yeah, this is where we are now, at the very end. Okay, but if I'm reading it correctly, the big uptick started somewhere around March or April? Yes. Okay, I see some correlation with the current events happening on European soil, with another large country which is again trying to block its people from getting access to, well, organizing power and information. So I see some correlation, but is there actually any causation? Have you investigated? And the DDoS attacks that you mentioned, could those also be a response to people trying to use Tor?

I think if there was a correlation there, then our congestion control work wouldn't be working, which I would be very sad about. But the interesting plot is really this one, because it shows the traffic that has actually been utilized; the other one is just what we observe to be available in the network. This bump in utilization fits extremely well with the Tor Browser release and the availability of congestion control, so I don't think those two things are related. I usually also have a plot in my slides that shows traffic coming in via bridges, where you can see the GeoIP of the incoming clients batched up in small buckets.
If you go to metrics.torproject.org, you will be able to see it broken down by country, and then I think you would be able to draw a better conclusion than by looking at the global state of the network here. Yes, I wasn't trying to detract from the progress. No, no, it's fine; thank you, it's a good question. Okay, let's go to the front mic.

Hi there, thanks for the talk, it's quite good, and thanks for your work, it's very important. I have a question about trade-offs, because of that survey you showed: one slide said "we think speed is more important than privacy", and two slides later you had the thing about post-quantum handshakes, which of course are larger, which means they will decrease speed. So there are some trade-offs you have to make there, I assume. How is that decision made?

So of course, this is not the kind of survey where the end result is that we drop everything we have in our hands, focus only on speed, and forget about privacy. That would not happen, right? The thing with the handshakes: remember, you establish a circuit to the exit node, and that's where the handshakes happen, but from the exit node you can create multiple streams outward. So the cost of these additional handshakes, both in computation power and in transport, is not that important here. Actually, until post-quantum handshakes also land in TLS, which is likely going to happen very soon (Google and Cloudflare have done experiments with that), this cost is very minimal relative to the network as a whole. But you are right, everything we do has to be balanced: are we doing something that potentially has a performance impact? Take the proof-of-work feature: historically, on Apple's iOS platform we've been suffering because the Network Extension has a ridiculously low memory limit, far smaller than what we were able to work with.
They recently bumped it, in iOS 15 or whatever it is. And there we need to be sure: can we even do a memory-ballooning kind of proof of work there? Because if we have a Network Extension and it runs out of memory, then Tor closes and everything just flows over to the open internet, right? So yes, there's a lot of analysis being done on the impacts of the features we're building. Thanks for the question.

And now we go back again to the back mic. Thank you for the talk. I had a question about Conflux, the multipath feature. I was wondering if it has any security implications for correlation and timing attacks? Absolutely. Everything we modify in this area needs pretty deep analysis of the correlation factors. There are also all these papers about choosing guards, and we have this system called vanguards that is implicated here as well. The paper you can read, "The Path Less Travelled: Overcoming Tor's Bottlenecks with Traffic Splitting", has some of that analysis, and we're probably going to discover more things as we engineer it. So absolutely: every time we modify anything related to incoming and outgoing flows, it has to have some pretty deep analysis done on it. Thank you. Thanks for your question.

And now to the question at the front mic. Thanks, Alexander, great talk. I couldn't really follow the UDP part. You said you had a trick with a DNS cookie that you use. Is that purely for the exit relay, in order to track UDP connections? No, the DNS cookie is for onion services only. It's only used for onion services, because an onion service is a hostname, but when we're working at layer three we need IP addresses. So the idea is that the DNS server will respond with some fake, long IPv6 address.
Then, when you make a connection into the userspace NAT router, Onionmasq, it will be able to look it up and say: hey, this is a connection to this onion address, so establish a Tor circuit to that instead. And you don't need it for TCP, right? Yes, we need it for TCP as well for onion addresses, because TCP also sits on the IP level. As for the UDP part, the reason we need a VPN for UDP is that applications don't support any protocol that would allow us to proxy UDP traffic. I think SOCKS5 has a way to do both service endpoints and outward connectivity over UDP, but none of the applications seem to use it. Okay, thank you. No problem.

And back to the back mic. It's not on. Mic on. No. So the current version of Tor is 0.4.7. I've been wondering why it's not a 1.x version; is there a specific reason for that? There are so many free software projects you could ask that question! No, there's not. Personally, I'm a big fan of identifiers that use a year and a name, but my team, if they're watching this, will probably say: no, we're never going to do that. We've just continued doing it this way. I don't even think we bump the first number; I think we went from 0.3.5 to 0.4.0. So there's not a great system; it's just: hey, now we've done some nice features, let's bump the middle identifier. With Arti, we are probably going to try to get up to 1.0 quickly and then move more aggressively to a modern software versioning scheme. Okay, thanks. No problem.

Okay, then one question from the front mic, and we have nothing from the internet, if I see it correctly. This will be the last question for now.
Hi, I'm very excited about the Rust implementation, but I'm also a little bit worried about putting all the development effort onto this specific implementation this early, because there is precedent of other projects which had a very well maintained code base, wanted to throw it all away and build something new, and then, years later, they were still working on the old code base and the new one never really caught on.

It's a cost-benefit question, right? You need to sit down and look at the risks of doing this. Will we, in ten years, as an NGO that is not paying the kind of salaries that the software engineering companies pay, be able to hire really good C programmers who can give the guarantees that we want to provide? Will the young and upcoming hackers be sitting and reading K&R books about C, or will they be doing Rust or Python or Haskell or something else? So there is a bit of modernization thinking in it as well, and the team also had an interest in doing it. I gave a talk at SHA2017 about my pet project, a Tor implementation in Erlang, and I've completely abandoned that, because now I'm spending all my time on C-Tor and our new Rust efforts. So yes, there is absolutely a risk. We hope we won't fail; that is the thing. But there's a lot of momentum in the organization, and also from our partners, around solving some of these very holistic issues: Tor is not a library right now, and it's a freaking nightmare to integrate into anything. We already see that people want to test it even though we don't feel ready, even though you shouldn't be using it yet. So we hope we will also be able to get some help from the greater community. Thank you.

Yeah, with that, I would kindly ask you to give it up for Alex. And...