Okay, it's time to start, so I'm going to start. Daniel told me that I can't actually speak too quickly, so if I speak too quickly, it's his fault.

In 1963, JFK made what was the first transatlantic phone call via a geostationary satellite to another world leader. He spoke from Washington DC through a satellite called Syncom 2 to the Prime Minister of Nigeria, who was aboard the USNS Kingsport, which was stationed just off Lagos. This was the first time two world leaders had spoken via satellite, but it was also a big turning point in our ability to expand communications on Earth. In the 60s the number of transatlantic telephone lines available was in the hundreds, so it was a very limited resource. By flying the satellite we demonstrated that, with a moderate investment, we could increase communication across the planet without having to deploy tons of infrastructure; using the satellite was a big alternative to deploying a transatlantic cable.

Syncom 2 was a spinning cylinder covered in solar panels, the second of three Syncom satellites, and it was put into a geostationary orbit. A geostationary orbit is at a high enough altitude that the satellite is continuously in view: from a position on Earth, an observer sees the satellite as stationary. This is achieved by putting the satellite into an orbit where it circles Earth once per Earth day, so the orbit is synchronized with Earth's rotation. This is in comparison to LEO and MEO orbits. A LEO orbit is a low Earth orbit, an orbit very close to the surface of Earth, and a MEO (medium Earth orbit) is sort of everything in between. For LEO and MEO orbits the orbit isn't synchronized with Earth's rotation, and so to an observer on the planet the satellite moves across the sky; for a LEO orbit the satellite moves across the sky quite quickly. This means that for a portion of the satellite's orbit the satellite is hidden behind the Earth from where you are and you can't speak to it, so to build up a complete communications network you need to have multiple satellites flying. Because the satellites are moving across your field of view, you also have to track them to keep a strong signal and a good link, and this normally requires mechanical means: you get tracking dish arrays and mechanisms to follow the satellite through the sky. Things that are in LEO are the ISS, Hubble, Starlink now, and Iridium. In MEO we have GPS and O3b. But the majority of VSAT communications, still as of last year, has been via GEO: because you can just point your dish at something stationary in the sky, it is very easy to set up a ground station and have something to speak to.

Satellites today are a bit different, and so this is HYLAS 1. HYLAS 1 is parked over Northern Europe and North Africa. It has a nice big beam that covers Northern Europe, and it also has a steerable beam which can be repositioned based on customer demand. One of these is, I think, three to four hundred million dollars to fly, so they're very expensive. They cost a lot of money to put into space, but they're quite good.

So, my name is Tom Jones. I'm a researcher at the University of Aberdeen, and I do internet engineering: I work on internet and transport protocol design and implementation.
As part of that I write standards in the IETF, and I'm to blame for RFC 8304 and RFC 8899, stuff about UDP. I like to hack on FreeBSD, and I use FreeBSD quite a lot as the implementation part of the IETF process, because running code is good. Really, I like to try and make the internet better. I like to say I'm one eighth of the BSD Now hosting team, or I have been since the middle of this year. For the last few years I've been working on a European Space Agency funded project to make sure that QUIC works well in satellite environments.

I have some IETF work in progress right now that could really do with some input; the IETF is a bit like an open source project and we'll solicit feedback from anywhere. The top one here, draft-jones-transport-for-satellite, is a document trying to describe the properties of satellite networks so that transport protocol designers can build test beds and harnesses and understand what's going on. We're almost completely lacking information on LEO and MEO systems because, on the one hand, not many of them fly and there's not a lot of service there, and on the other hand, no one will let us play with them. So if you have one of these and want to let me play with it, that would be cool; if not, I'd love to take some text, either on or off the record, and we can figure out how to make these protocols more open to other people.

This talk is an introduction to using dummynet to build networks for experimentation. It's wrapped up in the state of the art of transport protocol design, and I'm going to get a bit lost in internet engineering, because I like internet engineering.

Transport protocols are the protocols we use to move stuff around on the internet, the layer on top of IP; famous transport protocols are TCP, UDP and SCTP. They offer services on top of the bare-bones mechanism, IP, which just connects hosts together and does routing. The web runs on top of HTTP. HTTP has seen a lot of evolution in the 30 years since its inception. It has grown from a protocol designed for dealing with very simple web pages, basically what would go in an academic paper with maybe a couple of images, to what we deliver now: full-fledged interactive web applications with streaming video and so on.

HTTP has grown to improve performance in a lot of ways. The web pages we have today typically have 200 or so objects on them, so they are packed with a lot of stuff, while HTTP was designed for a page with maybe one or two external resources being pulled onto it. So HTTP has some scaling problems to work around. To work around them, HTTP introduced multiple parallel connections, and you would end up with the web browser making six HTTP connections to each server described by the page, which encouraged people to distribute content over big CDNs. This allowed you to speed up page performance.

For a long time Google had been working on internal projects to improve web performance and reduce web latency, and in 2012 we got a hint of this with the release of a protocol called SPDY. SPDY was adopted by the IETF and evolved into what is called HTTP/2. HTTP/2 tried to address some of the HTTP/1 series' performance issues by evolving the protocol a bit. One thing was to get rid of the TCP setup latency that impacted HTTP. HTTP/2 offers something called multistreaming.
Virtually, on top of your HTTP/2 connection, you're able to create multiple GET requests and have multiple channels coming through. Coupled with TCP Fast Open, it was able to get zero-RTT connection setup for subsequent connections. It added flow control. Flow control is very important because, after you've asked for 100 resources, it's up to the client to schedule when each of them comes, and this is actually a really difficult problem to deal with. It was found that when HTTP/2 helped, it helped quite a lot, but when it didn't help, it actually hurt, and so it wasn't a massive benefit across the board. Basically all web traffic so far has been HTTP over TCP. That is changing.

To speed things up we have to deal with the TCP handshake. A typical HTTP session looks like this from the TCP perspective. TCP has what's called a three-way handshake: the initiator, which I'm going to call the client from now on, sends a hello message to the server, a SYN; the server responds with a SYN-ACK; and finally the client ACKs that packet. After this ACK, the third message in our three-way handshake, we have a connection. Once the ACK has been sent the client can then send data, and so the soonest the server can have access to any request from the client is one and a half RTTs, one and a half trips through here. The idea with using multiple HTTP connections is that if you're doing serialised requests for resources on a page, you're doing this process at the start of every connection, and this process takes time; if you're doing a lot of these it will take a lot of time, and they build on top of each other. If you instead run these in parallel, you get more throughput and you're better able to use your link, because you're not spending a lot of time just waiting for TCP handshakes to happen.

Things in GEO orbit are really far away. To get to a position where your orbit is synchronized with Earth's rotation, you need to be about 36,000 kilometres from the surface of the Earth. At the speed of light, if you're directly below the satellite, it takes about 120 milliseconds for a signal to go from Earth up to the satellite, so about 240 milliseconds to go up and back down to the ground. This is very far away, and annoyingly the speed of light is fixed and we can't speed it up. Actually, most things speaking to GEO are not perfectly below the satellite they're speaking to, they're at an angle, and we more commonly see a propagation delay of around 280 milliseconds: it takes about 280 milliseconds for a signal to go from me, up to the satellite and down to Earth, which is the one-way satellite delay. So the round-trip time, the time it takes for a message to go from one side and come back, sits around 560 milliseconds. In reality we see a lot more latency added as the ground station speaks out to the internet, and so delays typically range up to about 650 milliseconds.

The delay hurts a lot. Anything that works in terms of the round-trip time, anything that is interactive, is going to be influenced heavily by the delay. To give you a sense of scale, my delay to the internet from here is about 30 milliseconds, so you're talking about 20 times as long for any of these connections. Lots of things we build the internet with, lots of the congestion control and recovery mechanisms and the buffer tuning we have, grow as a function of the RTT. This is OK when the RTT is small, but it gets very big on a satellite network and things just get confused; plain, normal TCP Reno on satellite is a very unenjoyable experience to use.
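As a rough back-of-the-envelope check of those GEO numbers (taking the roughly 36,000 km altitude, the speed of light as about 300,000 km/s, and ignoring the slant angle and terrestrial delays), the arithmetic looks something like this:

    # one-way trip from the ground up to a geostationary satellite, in seconds
    echo "scale=3; 36000 / 300000" | bc          # .120 s
    # ground -> satellite -> ground, the one-way path through the satellite
    echo "scale=3; 2 * 36000 / 300000" | bc      # .240 s
    # the full round trip, up and down twice
    echo "scale=3; 4 * 36000 / 300000" | bc      # .480 s, before slant angle and ground delays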
Let's talk about satellite networks a bit; a satellite network sort of looks like this. On the left of the image we have a client device, a nice little desktop PC. The client device is connected to a satellite terminal, the terminal is connected to a satellite dish, and the satellite dish has a radio transmitter on it, which I'm going to call the LNB today. In the terminal there is a device which handles all of the modem work for speaking to the satellite. The satellite then communicates with a ground station; these are normally a centralised site, there might be a few per satellite, and the centralised site then speaks out to the internet.

We can speed up the performance of TCP in satellite networks by using a performance enhancing proxy (PEP). The performance enhancing proxy acts as a middleman in the path of the communication from the client to the server, and it does stuff to make the connection better. What we commonly come across are called split-TCP PEPs: at the satellite terminal on the client side, and at the ground station on the server side, TCP connections are terminated, so the devices there basically insert themselves into the path of the connection and say, oh, you're speaking to me. This means that the client device speaking to the PEP has a very low RTT; in fact the PEP is normally running on the modem, which is just one hop away from the wireless router you're speaking to, so you get RTTs of about a millisecond when you speak to the PEP. Equally, on the other side, the server speaking to the satellite customer might not know the PEP is there; it's going to see a normal internet-scale delay, and so end to end on this path the TCP connections at the server and the client don't really know they're speaking over satellite.

The PEP works by capturing traffic from the client and intercepting it, and so in the diagram the TCP handshake, the SYN-ACK we get, actually just comes from the PEP; the connection gets terminated and intercepted. With an HTTP request this means that the client will make the request before anything has had time to propagate over the satellite link. Equally, when we send the SYN, the protocol that runs between the terminal and the ground station, which is proprietary and which we have no view of, will generate resource requests and start setting up to handle a TCP connection for us, and it might even lead to the generation of a SYN at the far side before we've made any further requests. Data will then be transmitted over the satellite link, get to the ground station's PEP, and from there a normal internet connection is made. Using a performance enhancing proxy like this we're actually able to get packets back at the client from the server in around one RTT, so around 600 milliseconds, whereas normally it would have taken 900 milliseconds before the server sent any traffic if we didn't have the PEP in the way. So there's a big performance improvement here, and a big reduction in latency for the things the PEP can predict for us.

QUIC is a next-generation transport protocol which was published by the IETF this year. QUIC came from further efforts at Google to improve web performance.
It's published over four RFCs. The main QUIC RFC is RFC 9000; it's a giant document that's worth reading if you want to learn about QUIC. It's accompanied by three other documents: something called the Invariants, which is RFC 8999 (the Invariants are basically everything in QUIC which is in the clear on the wire, which is very little), a TLS mapping document called RFC 9001, and a congestion control and recovery document called RFC 9002. The QUIC working group is not done; it is going to continue to standardise extensions to QUIC. In progress right now there is a datagram extension to do unreliable transport over QUIC, and something called MASQUE, which allows you to do proxying and is also the core technology behind Apple's privacy VPN that they rolled out this year. The QUIC working group is going to do what they've called a fast process to QUIC v2; they're not going to hang around, they're going to let the protocol evolve as it needs to.

QUIC runs on top of UDP; it uses UDP as a substrate, and so it uses UDP for port numbers, but that's basically it. It is a multi-streaming protocol with a lot of flexibility in it; by default it is reliable and in-order per stream. It was originally designed to carry HTTP/3, but it is now actually being put to a lot of other use cases. The transport protocol itself is fully authenticated and encrypted on the wire; there's almost nothing visible to an outsider. It has support for zero-RTT connection resumption, and support for connection migration and load balancing, so it is designed for the way we use the web today, and it has mechanisms that allow more modern congestion control and loss recovery, which basically take a clean-slate approach to what was in TCP. It's been designed deliberately to resist ossification. If you think about the TCP PEP from before, the PEP that was terminating our TCP connection so it could make its own special one, it has to understand how to speak TCP and be able to get involved in the communication; if the PEP is not upgraded to understand new TCP developments then those things just don't work, and so there was a lot of trouble deploying TCP Fast Open, because the network had ossified around TCP. QUIC has been deliberately designed to avoid ossification; part of that is the authentication and encryption, and another part is that basically all the protocol fields in QUIC that could be left as don't-care can be greased. Greasing means just sticking random numbers in there, so it's designed to be hard to fingerprint QUIC and figure out what is inside.

There are a ton of QUIC implementations out there; all of them are in user space apart from one, which also runs in the kernel. QUIC did a great public interop process based on Docker, where they run public test suites of all the implementations against each other; you can look this up, it's still online, it's still being used, and there's a lot available. For you as a network operator this means you're going to see a lot more UDP traffic; actually, if you're paying attention, I imagine you've already seen a big spike in UDP traffic towards Google resources and towards Facebook. Your metadata is basically gone. There is something called a spin bit available in QUIC so you can measure the RTT of connections, but it has to be used by both sides of the connection, so you might not see it work. Interception boxes, MSS-rewriting boxes, HTTP proxies and PEPs aren't going to work any more; you're going to see less of your network, and you're just going to have to accept the new reality. I don't think you can really argue with this one; I think you're just going to have to accept that traffic has become encrypted.
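If you want a rough sense of how much of this is already on your own network, one crude way (a sketch, with em0 standing in for whatever your interface is called) is just to watch for UDP to or from port 443, which is where most QUIC traffic lives:

    # show packets that are probably QUIC: UDP on port 443
    tcpdump -ni em0 'udp port 443'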
OK, so why am I looking at QUIC and satellites? In 2019 and before, as this protocol was being developed, there was a serious concern that QUIC was not going to work well over satellites. The perception is that TCP without a PEP on a GEO link is not a good idea, and it's not great; I mean, it's very unenjoyable. You can go and set one up later with the stuff in this talk and you can see that it's quite hard. But the view was that connections without a PEP were just completely untenable and unusable, and because QUIC can't be accelerated, because it is encrypted, they basically thought this was the end of the road for GEO. Because these are 400 million dollar projects, they were a bit worried that their investment was gone. DNS acceleration is still possible, for now, until we see DoH deployment, but yeah, there's a big concern about the future of GEO.

At the University of Aberdeen we've had satellite testbeds for developing internet protocols since the 80s. Over in the middle left of the picture, just above the dome, is our old satellite testbed, a falling-down pile of satellite dishes; somebody helpfully built a library in the way, so they no longer had a view of the sky anyway. On the right we have our modern VSAT infrastructure: we have two links from Avanti. These look basically identical to satellite TV dishes; they are a bit larger, and the main difference is they have a massive LNB on the front so they can speak back. We have what is called a 10/2 service from Avanti. It's a GEO service, so it has all the problems built in. They've kindly given us an engineering link; we don't know what this means, but in reality we get something like 8.5 megabit down and 1.5 megabit up. We typically see around 600 milliseconds of delay, plus internet delay, so we can maybe hit Google in about 610 to 620 milliseconds. There's a big variance in delay; it's not uncommon to see spikes up to 8 seconds, and so it's hard to use. We have a limited SLA, a limited scale in our agreement; we don't know what the limit is, but at some point, if we move too much traffic, the internet will stop. So we can't just run constant back-to-back file transfer experiments, because we will run out of our resource for science.

To work around this we need to use emulation. Emulation allows us to model networks that we don't have access to, but it also allows us to accurately model networks that we can't really abuse. For network emulation we found dummynet on FreeBSD is the best choice. We did look at using qdiscs and tc and all of Linux's netem infrastructure, but we found that it was very difficult to design a link that would give you the performance characteristics you actually require: you could design a link that would give you the delay you've configured or the bandwidth you've configured, but it was applied statistically, which meant the average would hit these limits while sometimes packets went through too fast, and that wasn't really acceptable for the sort of tests we were going to do.

So to do network emulation we use dummynet. Dummynet is a traffic shaper, bandwidth manager and delay emulator. The traffic shaping means that it can queue and balance traffic among multiple applications, which is not something I'm going to touch on today. Bandwidth management means that it is able to enforce bandwidth limits and control the rates at which applications can send.
Delay emulation means that we're able to emulate networks of different lengths, from localhost all the way out to the Moon. I like to think about delay lines; mercury delay lines are a thing that existed once and they're great to look up. Dummynet has been in FreeBSD for a long time; I think it was originally proposed in a 1997 paper by Luigi Rizzo, but it's seen a ton of evolution and improvement since then. Just a short list of the features added: bridging support; it was integrated into ipfw after it was first proposed; it grew packet scheduling and AQM algorithms in a really nice way that makes them quite pluggable; it is the first place the SCTP NAT was implemented, I understand; and it has MAC layer emulation. It's sort of a great thing.

Dummynet needs to interface with a packet classifier. Dummynet doesn't really have any idea about how packets are specified and moved around (it has grown some of this, but it didn't have it originally), so it needs to integrate with a packet classifier, and a packet classifier is also a firewall, so dummynet integrates really well with ipfw. There are two interfaces into dummynet from ipfw: there are pipes, which I use to emulate links, and there are queues, which are used for doing traffic shaping and scheduling. When we want to feed traffic into dummynet we just need to add a rule in ipfw, and the packet will get handed to dummynet; at some point later the packet will pop out, or it won't, or it might just pop out immediately. It is very, very straightforward; there's very little dummynet in this talk.

My first dummynet example is the one from the dummynet website, and it's the first one I did. It lets you simulate an ADSL link to the Moon: you add to ipfw a pipe for your incoming traffic and a pipe for your outgoing traffic, and then you configure the main parameters on each pipe, so you configure the bandwidth, the amount of buffering and the delay; there is also a packet loss rate available for pipes, which is left out here. Anything you configure with an ipfw command will be set, and the defaults will be left for any of the fields you don't specify, so that is a bit of a minefield you can hit. A sketch of roughly what those commands look like follows.
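The exact figures live on the dummynet site and in the slides; purely as an illustration, with made-up numbers rather than the original ones, that kind of configuration looks roughly like this:

    # send all traffic through a pair of pipes, one per direction
    ipfw add pipe 1 ip from any to any out
    ipfw add pipe 2 ip from any to any in
    # ADSL-ish asymmetric rates, small queues, and a Moon-ish one-way delay
    ipfw pipe 1 config bw 256Kbit/s queue 10 delay 1300ms
    ipfw pipe 2 config bw 640Kbit/s queue 30 delay 1300ms

Anything not set here, the loss rate for example, keeps its default, which is the minefield just mentioned.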
Before we can talk about emulating our network, we need to figure out what the network is and what parts of it we care about. There are four main properties we like to talk about when we're describing what a network is made of. There is the delay, which is the time it takes for a signal to propagate through the network, so how long it takes for a packet I send to get from me to the server I'm speaking to. The bandwidth, which is the number of bits per second the network can process. Buffering, which is the network's ability to accommodate bursts of traffic, or to cope when it has too much traffic; buffering is very important, because too little buffering and performance suffers and throughput drops, while too much buffering and latency suffers, so it's a very difficult thing to tune. There's also packet loss: packets are dropped all the time, IP is a best-effort transport medium, so it doesn't do anything to try and make sure packets get there. On a satellite link there's no packet loss on the satellite side, but there is packet loss on other parts of the network. So we want to characterise our network before we try and build any emulation of it, and that means we need to run measurements.

Here are some quick rules for making sensible measurements. You need to take multiple measurements, and you need to apply common sense to the measurements you take; you need to work from an average, really. You need to understand what the network can do before you look at any new protocol; you definitely need to set up a baseline to make sure your network is configured properly, and it's likely that your environment will have its own peculiarities. You need to test your measurements against your intuition and your understanding of the design and the configuration's limitations. You need to be very careful that the measurements you get are actually measuring something: if you run an iperf and you get 11 gigabit a second on your home DSL, you're probably just measuring against localhost, and equally, if your ping is 28 milliseconds, then I am sure you're not using the satellite link. Every time I use the satellite link I test to make sure the RTT is correct. If everything matches your expectations, then also be suspicious, because the computers are definitely plotting against you.

The main characteristic we talk about with satellite networks is delay. We measure delay in seconds, and we normally talk about delay in milliseconds, thousandths of a second; that's precise enough without being silly, although data centres will see lower delays than this. I measure delay using ping; ping is one of my favourite hacks in computer networking. Delay will have a big impact on anything that needs to feel interactive, and delay variation will have a big impact too, because it makes things hard to predict; if the delay is constant you can actually start to work around it. A great test is to just try using SSH over 4G and then start a download; you'll see your delay go a bit wild and it'll be really, really unpleasant. When we're measuring delay we need to get a reasonable number of samples so we can get a min, max, average and a picture of what the delay variation is. The delay will vary based on tons of things: packet scheduling, hardware, and link-layer losses; networks like Wi-Fi will retransmit for you, so you won't actually see a loss, but you will see a change in delay. We found that the real satellite networks we use have a diurnal pattern of use, and so to get a real picture of what is going on we ran a ping every second for a week; we figured out where evening was and what the variance at different times of day was, and we could actually pick this out in the delay variance. If you want to run a single test you can just ping eurobsdcon.org. I did this from home, where I'm not right now. You'll see that for the third packet here the delay is 115 milliseconds; that means something happened inside the Wi-Fi, but you can see in the summary that there was no packet loss, so something weird went on.

A characteristic you care more about, because it's probably the one you buy, is bandwidth or capacity. Bandwidth is measured in bits; we talk about millions or billions of bits, so megabits and gigabits. There are loads of tools for measuring capacity. I love iperf3: it can benchmark TCP, UDP and SCTP, it can report in JSON, and it has a single-shot server mode which makes it great for integrating into testing. Annoyingly, it defaults to measuring from the client to the server. You use iperf3 like this; I did a measurement from home, and the -R at the end there makes the server send to me. (It's a secret EuroBSDCon iperf server, don't use it.)
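The slides carry the actual commands and output; as a hedged sketch of the sort of thing being run (the iperf server name here is a placeholder), it is roughly:

    # delay: take enough samples that min/avg/max and the variation mean something
    ping -c 100 eurobsdcon.org
    # capacity: a one-off iperf3 server, and a client run with -R so the server sends to us
    iperf3 -s -1                        # on the server
    iperf3 -c server.example.net -R     # on the client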
You can see that it measured my network capacity. iperf3 defaults to using TCP, so you can see that TCP sends at 28 megabits per second. If I measure the normal way, from the client to the server, so without the -R flag, you'll see that my network capacity at home is awful, which is why I'm in the local hackerspace and why BSD Now won't be getting live streams with me in them for a while; my network is just not up to it.

I also did UDP measurements, and UDP measurements are a bit different. For UDP, iperf3 will just try to send packets and see what happens, and it defaults to sending at one megabit per second; so if you run an experiment like this you'll see that you get 1.05 megabits per second and feel happy. Instead you need to tell iperf3 what to send with the target bitrate flag. I ran a test here, sending from a client in my house out to the internet, and I asked iperf3 to send at 10 megabits per second. You'll see that we get a report from the client and a report from the server. The report from the client is what it was able to send; if you set this number too high then iperf3 won't get there (on a single core it will struggle to do a gigabit per second), so you'll see what it actually sent. And there is a report from the server, and these are the packets that actually arrived. Running the UDP tests we see that I have 3.39 megabits per second up, which is dreadful, and we see that somewhere around 62% of the packets go away. I used the --get-server-output flag here, so what we can also do is look at the time intervals from iperf3; you can look through these, and if you see a big change in the packet loss percentage you might have competing traffic, and that might influence the measurements you're doing.

The hardest question you're going to see today is: how much buffering do I need? Networks need to buffer packets; buffers are required to make sure that performance is good enough, but with too much buffering performance suffers and with too little buffering performance suffers, and yes, performance can have different meanings here. With too much buffering our latency will get really high, because we've got to get through the buffer before we can get through the network, and too little buffering means that the protocols we have will just struggle. Below I've drawn out a quick plot of what Reno looks like, a Reno sawtooth. Reno will grow until we see losses and then it will halve the sending rate; we grow the congestion window, then we get a loss and drop to 50 percent. If we do this without any buffering we get about 50% utilisation of the link; if we do this with one BDP of buffering we get closer to 80% utilisation of the link, so adding buffering can help. With too much buffering, though, we get bufferbloat and everything goes the wrong way again.

When we talk about buffering we talk about the bandwidth-delay product (BDP). This lets us size buffers for applications. We get the bandwidth-delay product by multiplying the bandwidth and the delay. To fill the network we need to be able to send one BDP of data at a time, and the sender and receiver have to be able to buffer this much; the reliable protocols we have require that the receiver is able to receive this much traffic. The BDP for satellite networks is completely unreasonable, so I used calc and made a quick table here.
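As a rough sketch of that arithmetic, using numbers of the kind that come up next (a 50 Mbit/s link over a roughly 600 ms GEO round trip):

    # bandwidth-delay product in bytes = bandwidth (bit/s) * RTT (s) / 8
    echo "50 * 1000000 * 600 / 1000 / 8" | bc    # 3750000 bytes, in the region of 4 MB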
When we were talking about doing 50 megabit experiments for the transport-for-satellite document, I did the math and found out we need to have a four megabyte window for us to saturate the link at 50 megabit. In comparison, on a datacentre link with around five milliseconds of delay, you could do 20 gigabit of traffic with that size of window. So with satellite networks we're really pushing on the harder part of the network here, and we need to have really large buffer sizes.

OK, now we can talk about networks. For network experimentation there are a couple of topologies that we end up with, but the most common one is something called a dumbbell network. A dumbbell network sort of looks like this: we have infrastructure at one side, it speaks through a bottleneck, the bottleneck is the internet or a router, and then we have stuff coming out the other side. We found that for delay emulation virtual machines just have problems, because time in a virtual machine is too weird; with virtual machines we actually see packets coming through faster than they should be able to. So you need to use hardware for this, and our delay networks are built with physical computers, which means we've got some weirder stuff. The network that we used for doing the satellite experiments looks like this. It was built out of three APU2 boards, all running FreeBSD 12, connected up so that the client connects to the router and the router connects to the server, and traffic goes over those links. To make this manageable we have a head node which we use as a point to SSH into to control any of the hosts, and these are all connected together through an infrastructure switch; the client, router and server all connect to the infrastructure switch, which gives us a way to manage experiments without sending traffic on the experimental link. All connected together it gets a bit like this, and in reality it looks like this: in great COVID science you get to see our testbed sat on my bed. It is not a nice tidy collection of cables, but it was what we had.

This network is very simply put together; the entire config is in these slides, some of it hidden for later. The router is set up so that it acts as the gateway on the interface connected to the client and on the interface connected to the server, and it forwards packets. The router's firewall rules look like these: basically it allows traffic, it enforces the forward-direction delay and bandwidth on stuff from the client interface, and it enforces the return direction on the server interface, and that's all. This is the entire firewall rule set; it's very, very simple and very easy to integrate into a test like this (a rough sketch of this sort of setup appears a little further on). So the network actually looks like this: we get our return traffic, which is traffic from the client to the server, going out over the satellite link, and we get the forward link set up like this, so it's very simple and easy to integrate. Because dummynet is unfriendly to me and clears out all parameters when you reconfigure, as we ran experiments with other configurations based on scenarios from transport-for-satellite I ended up writing scripts to control the reconfiguration of the testbed, and they look like this. But really, this is all the interface into dummynet you need; all of the hard work goes into figuring out what the network is, and dummynet does quite a good job of getting there. So we ran experiments for satellite links inside this testbed.
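The real router configuration is in the slides; purely as a hedged sketch of the shape it takes, with the interface names (igb0 towards the client, igb1 towards the server) and the rate, delay and queue values invented for illustration, it is something like:

    # the router forwards between the two experiment interfaces
    sysctl net.inet.ip.forwarding=1
    # one pipe per direction, configured to look like the GEO link
    ipfw pipe 1 config bw 10Mbit/s delay 300ms queue 100
    ipfw pipe 2 config bw 2Mbit/s delay 300ms queue 100
    # forward direction: traffic arriving on the server-side interface
    ipfw add 100 pipe 1 ip from any to any in recv igb1
    # return direction: traffic arriving on the client-side interface
    ipfw add 200 pipe 2 ip from any to any in recv igb0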
While we were configuring the testbed we actually found it very difficult to get TCP up to speed. If we take a plot from Wireshark of a TCP connection, I think this one is moving 100 megabytes of data, we see the congestion window looks like this. Now, I said before that Reno has a typical sawtooth pattern; this is definitely not that. Instead what we have is a staircase going up, and if we zoom in, what we actually see is that the blue line is the congestion window as estimated by Wireshark and the green line is the receive window as signalled by the other TCP, and we see a nice staircase in the green line followed by the blue line. What this tells us is that we're receive-window limited, but the receive window is growing. Some digging into FreeBSD, and we found that the receive window in FreeBSD is by default quite small, I mean for a VSAT network; it's quite big for a normal network. But it will autotune and grow automatically, and each one of these steps in the plot is 16K; you can go and calculate this out and look at it, each step is 16K, and that was the automatic grow size for the TCP socket buffer. So we had to do two things to get performance up: we had to configure the max socket buffer size on the client and the server, which allowed us to have bigger buffers there, and then from iperf we needed to configure a window size so that the client and the server would signal a large enough window that we could actually saturate the connection (a rough sketch of this kind of tuning comes a little later). And to get dummynet to support the capacities we needed, we need this dummynet configuration right here.

With all this work done we were able to start looking at QUIC. We ran quite a lot of experiments with QUIC; we looked at QUIC's performance in comparison to TCP and TCP with a PEP, and then we went on to look at other things with QUIC, and there's a large EU report you can try and dig out that talks about all of this. But a really early discovery we made is that the QUIC we were looking at, and we were looking at quicly in 2019, which is an implementation from Fastly (it's had a lot of development since then, so this is not a valid result any more), was having a lot of trouble saturating the link. We made plots of the congestion window in blue and the flight size, where the flight size is how much data you're sending at once, and we found that the congestion window could grow forever, well beyond the bottlenecks marked by the teal and purple lines on this plot; sending was entirely governed by something else. We dug into this and found that flow control in QUIC, which is sort of QUIC's equivalent of a receive window, was having a lot of trouble with the RTT. Flow control in QUIC is different from TCP: QUIC's flow control is credit based, where TCP's is window based. Window-based flow control means that the sender needs to track the size of the window and not exceed what it's been told it can send; with credit-based flow control the receiver sends you credits, every time you send a packet you spend credits, and these need to be renewed by the receiver. What we found was that we were getting flow-control credit released three times per RTT, and it was enough to keep us at the BDP of the network, but it wasn't enough for us to actually have any congestion control happen. So we worked around this by completely disabling flow control, and we tried to offer some advice to the designers for review.
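Going back to the receive-window limitation above, a rough sketch of the kind of FreeBSD tuning involved (the values here are illustrative, not the ones from the slides):

    # let socket buffers grow well past one BDP of the emulated link
    sysctl kern.ipc.maxsockbuf=8388608
    sysctl net.inet.tcp.sendbuf_max=8388608
    sysctl net.inet.tcp.recvbuf_max=8388608
    # and ask iperf3 for a large enough window so the peer advertises it
    iperf3 -c server.example.net -w 4M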
The final plot I have is QUIC in comparison to TCP with TLS 1.2 and 1.3. QUIC is in the middle in orange, and it does reasonably well. The TCPs on the left, in purple, are using the VSAT network with a PEP, so these are enhanced connections using TCP and they're being sped up. The connections on the right are not using a PEP; their traffic is tunnelled through OpenVPN. We actually found that QUIC is doing OK. I don't think there's actually a lot to be worried about; there are some great opportunities to speed QUIC up, but it seems to be doing quite well.

Doing this we did hit limitations of dummynet. It's awful that you can't use dummynet in a virtual machine; it would be nice to use dummynet with VNET, but that just wasn't available when we ran these experiments. Dummynet is quite old and has a lot of 32-bit counter limits which are slowing it down, and it would be good to be able to use dummynet above 4 gigabit in the future. Luigi presented TLEM in 2016, which was a terabit-class emulation design, but there's nothing public about it and we've not seen any growth from there, so who knows what will happen next. Dummynet is not frozen, though. This year Netgate are putting a lot of money into bringing their changes to dummynet back into FreeBSD, and this year Kristof Provost landed VNET support for dummynet. This is great because it means that you can now integrate dummynet into test suites in FreeBSD and other networks, so you could add it to your developers' test infrastructure and make them suffer. There has been pf support for dummynet for a long time; it's been in macOS for quite a long time and in pfSense, and it's now being ported across. There are reviews in FreeBSD, and there's a link to the main review, so you could go and test this and work on it. We think this will hopefully be MFC'd; I think there would have to be a big blocker for it not to be MFC'd, so there will be dummynet for pf landing. There is a work-in-progress high-performance dummynet rewrite, but it's private, so we'll see it if we see it; maybe watch the mailing lists to see if it comes through.

OK, I have now spoken for 40 minutes; I'm happy to take any questions.

How does QUIC deal with differing MTUs? So, RFC 8899 is Datagram Packetization Layer Path MTU Discovery; I can say that because I wrote it. It is an algorithm for detecting the path MTU on networks that support ICMP and on those that don't. QUIC actually requires that paths be able to support 1200 bytes, and so all the initial handshake packets are padded out to 1200 bytes, so it already deals with a lot of the path MTU problems, and the algorithm we wrote is the next step.

A lot of work has gone into QUIC to make... sorry, the question is: can QUIC be misused for DDoS attacks, in the same sense as DNS amplification attacks, a short request with a large answer? No, we don't think so; it's been designed to avoid this. Part of that is that it requires the packets before the handshake to be padded out to 1200 bytes, which cuts out a venue for amplification. I'm pretty sure servers aren't meant to send much more than that in response; they're meant to stick to a limit based on what the client has sent. But even if there is an amplification it's going to be quite small with the 1200-byte limit, and there's a lot of crypto involved and a lot of authentication of the source, so it should be avoidable. I'm sure there's an amplification attack hidden somewhere, but there's been a lot of work to avoid this.
There's been a lot of thought put into QUIC to make sure it's safe.

Can physical networks be replaced with jails and VIMAGE? Yeah, for a lot of cases they can. For traffic shaping cases you could probably replace them, and for delay emulation you could probably replace them as well. The issue we had was that, because we're looking at the performance of the transport protocol, if we get a packet which has come through too quickly then it becomes an unfair experiment to run, and from a scientific perspective that wasn't allowed, so we had to use hardware to make sure we were getting correct delay emulation. But you might not care; if you're running your web proxy and you just want to see how things work, you could still try it. It won't be as scientific, and it might not be the same as a real use case, but it's definitely better than nothing.

What's one feature you wish dummynet had? I wish that dummynet could have network rates described in megabits and gigabits per second, but when I have tried to use them recently the client has given me bits per second instead; maybe Kristof has fixed this. That's not really a feature, that's a bug. I'm actually quite happy with what dummynet offers right now; it has been enough to build these experiments. I'm sure there is stuff available in tc and netem on Linux that would be nice to have, but it's not been ported yet. I think we are lacking a lot of good examples of how to use dummynet. I've definitely seen papers where people are trying to look at QUIC on satellite using netem, and the networks they've built have not actually been accurate enough for some of their claims. So it's hard, this isn't an easy thing to do; it's really difficult to test and verify these networks.

The question: at the beginning of the talk you mentioned that there was fear of QUIC being deployed on satellite networks because it would kill investments; what needs to be implemented in satellite, does the OSI model not apply here any more? So, the OSI model applies. The meddling that PEPs do with traffic makes the web much easier to use on a satellite network. People are very fickle in their enjoyment of the internet, and the delay you get on one of these networks can actually be excruciating; even in the best case, where there's no packet loss and nothing is going wrong, it is not fun to use, and if there is packet loss and things are being retransmitted a lot it can be horrible to use. People are buying these services, and if they don't like them they might just not buy them, and it is entirely possible that another service, maybe with less capacity, becomes available; people might just install wireless links and point-to-point links that serve that market. But yeah, it's like a weird fear; people are just scared of everything that's new. New is scary.

OK, are there any more questions? OK, I don't see anything else. Thank you for joining me today; I hope you enjoy the rest of EuroBSDCon. It's been great getting most of the conference to happen, and I can't wait until we can do this again in person.