So yesterday we went through what congestion in the internet is, the causes of congestion and the result, that is, the thing we don't want: the increase in delay and eventually dropped packets, because they cause performance problems for our applications. If we have no way to control congestion, eventually our network, or the internet, will fail. TCP has a mechanism for controlling congestion built in, and that's the main way congestion is controlled in the internet today: TCP does it for us. Just going back, recall that there are three main features of TCP. The first is reliability, achieved through retransmissions: we send data, and if something goes wrong we resend the data, we retransmit. Retransmissions occur when we have a packet loss, and remember that in TCP's retransmission scheme there are two events that indicate a packet loss: a timeout, and the special case of three duplicate ACKs. The retransmission scheme for a timeout is simply called basic retransmit: I send data, I'm waiting for an ACK, I don't receive the ACK within some time, so I retransmit. That's basic retransmit. 
The problem with basic retransmit is that waiting for a timeout can mean waiting too long, which gives low throughput, because we wait a long time before sending anything. And since in many cases we send multiple data segments and expect multiple ACKs back, if one of those data segments is lost, it's likely we'll receive several ACKs with the same acknowledgement number. Say I send a sequence of six data segments and expect the corresponding ACKs back; because one segment was lost, the receiver keeps acknowledging only the data it received in order, and I start to receive duplicate ACKs, that is, acknowledgements with the same ACK number as the previous one. TCP defines the trigger as three duplicate ACKs, that is, four ACKs in total with the same ACK number; when that happens we assume something has gone wrong, a packet has been lost, and we retransmit. So there are two indicators of a packet loss, and both result in a retransmission. 
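To make the two loss indicators concrete, here is a minimal sketch in Python. This is my own illustration, not TCP's actual implementation: the function name `detect_loss`, the list-of-ACK-numbers representation and the timing parameters are all assumptions for the example.

```python
def detect_loss(ack_history, last_ack_time, now, timeout):
    """Return which retransmission trigger (if any) has fired.

    ack_history: ACK numbers received so far (hypothetical format).
    """
    # Basic retransmit: no ACK has arrived within the timeout interval.
    if now - last_ack_time > timeout:
        return "timeout"
    # Fast retransmit trigger: three duplicate ACKs, i.e. four ACKs
    # in a row carrying the same acknowledgement number.
    if len(ack_history) >= 4 and len(set(ack_history[-4:])) == 1:
        return "triple-dup-ack"
    return None
```

The second check counts four equal ACK numbers because the first ACK is the original and the following three are the duplicates.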
The other feature we went through last week was flow control. Flow control focuses on the end points, the source and destination computers: it controls the speed at which the source sends so that it doesn't overflow the buffer at the destination. The parameter associated with that is the advertised window, AWND. The source holds this parameter, its value is set by the receiver, and the number of bytes the source is allowed to send before it has to wait for another ACK is limited by the advertised window. But in fact there is a third mechanism, congestion control, which is what we're going through now and what we introduced yesterday. It brings a new parameter, CWND, the congestion window. Like the advertised window, it limits how many bytes the source is allowed to send, and in fact the source is now limited by the minimum of these two parameters. So imagine the source has two windows, the advertised window and the congestion window; the number of bytes it may send at any point before it has to wait for an ACK is the minimum of the two. It sends that many bytes, then waits approximately one round-trip time to receive an ACK, which may then allow it to send more. So on average we can send one window per round-trip time, where the window is the minimum of the advertised and congestion windows. And we mentioned yesterday that congestion control is about making sure we don't overflow the routers between the source and destination, that is, not causing congestion in the internet, because congestion increases delay and leads to packet drops. We also mentioned that an indicator of congestion is packet loss: when my TCP source starts sending data, if there's some packet 
loss, then that indicates there's congestion in the network, and our response is to slow down. To slow down, we reduce the congestion window, because the congestion window controls how much I'm allowed to send per round-trip time: if I reduce the congestion window, I can send less per round-trip time, that is, my sending rate goes down. So although we try to describe these three mechanisms, retransmission, flow control and congestion control, separately, they're all related: congestion control makes use of the same packet-loss events as retransmission, and the speed at which the source can send is limited by both flow control and congestion control, whichever window is lower. So the main questions for TCP congestion control are these. How does the TCP sender limit its sending rate? Well, the sending rate is proportional to the window size, and the equation we've seen is that the sending rate is approximately the window at the source divided by the round-trip time: we can send a window of bytes every round-trip time, where the window is the minimum of the advertised window and the congestion window. So the TCP sender limits its sending rate by changing the congestion window. That's the answer, because we cannot control the round-trip time, which is a characteristic of the path, but the source can increase and decrease the congestion window. How does the sender perceive that there is network congestion somewhere in the routers between the source and destination hosts? Packet losses. Packet losses are a good indicator of congestion, so as we said yesterday, a packet loss event, a timeout or three duplicate ACKs, is an indicator of increased congestion. And now what we want to go 
through is how we respond, what we do when there's increased congestion: the congestion control algorithm. The sending rate is limited to the minimum of the two windows divided by the round-trip time, but since we're focusing on congestion control, to keep it simple let's assume that CWND is always smaller than the advertised window, that is, that the advertised window is very big, so CWND is always the minimum and the congestion window alone controls the sending rate. In reality that may not be the case, but it lets us concentrate just on the congestion window. And we said there are two types of loss events: a timeout is an indicator of a lot of congestion, whereas three duplicate ACKs is an indicator of a little congestion. The more congestion we think there is, the more we're going to slow down, so we'll respond differently depending on the event. Let's go through the steps of the algorithm. One other thing we haven't said: congestion control is about decreasing the sending rate when we experience increased congestion, but also the other way round, when we experience less congestion in the network we increase our sending rate. So the source should modulate its sending rate, reducing it under congestion and increasing it when there's none, so as to maintain a rate that avoids congestion. How do we detect decreased congestion? The presence of ACKs. We said that a packet loss indicates increased congestion; but if there's no packet loss, my data gets there and the ACKs come back, and that's an indicator of decreased congestion, that everything is going well. So the arrival of acknowledgements indicates congestion is going down and we can send more, and the faster the ACKs arrive, the larger the 
decrease in congestion and the more I can send. So we have two factors changing how much we're allowed to send: losses decrease the sending rate, ACKs increase it. How do we do that? With what's called a congestion control algorithm, and there are different ones. We're going to go through a very simple one with several components, and we'll take each of them separately. The first component is additive increase, multiplicative decrease, AIMD, which is the way we increase and decrease our congestion window. Then slow start, SS, a special mechanism to start the data transfer, and then how to react to packet losses, what we do when there is a packet loss. Some terminology we'll use, which some of you have seen. The congestion window, abbreviated CWND, is measured in bytes; it's a counter of bytes. The round-trip time is measured in seconds or milliseconds, and we'll assume for simplicity that in our examples the round-trip time is constant: when I start a TCP connection we have some round-trip time and it stays the same while we're transferring data. In reality it may go up and down, but for our simple analysis let's assume it's constant. The other one is the maximum segment size, MSS, the maximum number of bytes TCP will send in a segment, in a packet. It's normally determined by the path, because each of the links between the source and destination hosts has a limit on the maximum frame it can transmit, how many bytes it can carry in one frame. For example in Ethernet it's about 1500 bytes: an Ethernet frame normally carries 1500 bytes of data, plus or minus a bit because of headers, and other technologies have their own limits. So let's say the first link has a limit of 1500 bytes, the second link 1000 and the third link 2000. The goal with 
TCP is to send a segment such that segmentation, or fragmentation of the IP datagram, will not be needed anywhere across the path. So when TCP sends a segment at the source, how big should it be in this example? We want the segment to travel the whole path without being split into smaller segments or smaller IP datagrams. How big should it be at the source? 1000 bytes, ignoring headers and so on. It should be as big as possible, because the larger it is, the lower the overhead of the headers, but small enough that it will not be fragmented by any of the links. If we used 2000 bytes, on the first link we'd have to break it into two packets, one with 1500 bytes and one with 500, because the maximum size on the first link is 1500. If we used 1500, it would be okay across the first link, but the second link would have to fragment it, because its maximum size is 1000. So for our path, the ideal maximum segment size is the minimum of the links' maximum packet sizes: 1000 bytes would be the maximum segment size used by TCP. Usually TCP determines that at the start of a connection, or assumes some value, so we'll assume we know the maximum segment size when we discuss how TCP works, and we'll assume some value for our simple analysis. That's the size of the segment TCP sends, or at least the maximum size; it will not send anything larger at any one time. 
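The rule above, the smallest per-link limit wins, is one line of code. A sketch, where `path_mss` is just an illustrative name and the byte limits are the example's 1500/1000/2000:

```python
def path_mss(link_limits_bytes):
    # A segment avoids fragmentation only if it fits every link on the
    # path, so the largest safe size is the smallest per-link limit.
    return min(link_limits_bytes)

mss = path_mss([1500, 1000, 2000])   # -> 1000, as in the lecture example
```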
The last thing we're going to assume is that whenever we send data segments from source to destination, the destination sends one ACK for every data segment received, which is what we'd expect: I send 10 data segments, the destination receives 10 data segments, and one ACK comes back for each one, so I receive 10 ACKs. In fact TCP doesn't have to work like that; it has options that allow one ACK for multiple data segments received, but for simplicity, one ACK per data segment. So we have the maximum segment size, which is fixed or given; the round-trip time, a characteristic of the particular path, which is fixed; and the congestion window, which is what our algorithm is going to change. The first part of the algorithm is AIMD, additive increase, multiplicative decrease. How does it work? There are two parts: additive increase is the algorithm to increase the congestion window, and multiplicative decrease is the algorithm to decrease it. If no congestion is detected, that is, we're receiving ACKs back and everything's okay, then we increase our sending rate in an additive, linear manner. The idea is that we start at some slow sending rate, and as long as no congestion is detected we keep increasing, until congestion is detected and we drop the rate down again. So additive increase is the way we increase the sending rate, and it does so slowly: the aim is for the sender to increase the congestion window by one maximum segment size every round-trip time. Let's put some numbers to that. Say the maximum segment size is 1000 bytes and the round-trip time is 20 milliseconds, and consider the start of the TCP connection. We've established a 
connection from source to destination and we want to transfer some data. Let's set the congestion window's initial value to one maximum segment size, 1000 bytes. We need some initial value; in practice it may be two or three maximum segment sizes, but usually quite small. The idea is that if there's no congestion, we want to increase our sending rate, and the equation from before was sending rate equals congestion window divided by round-trip time. The round-trip time doesn't change, we assume it's fixed, so to increase the sending rate we increase the congestion window, and the goal is to increase it by one MSS, 1000 bytes, every round-trip time. We could draw that as follows. The congestion window is 1000 bytes, meaning we're allowed to send 1000 bytes. How many segments? One; our maximum segment size is 1000, so we send one segment. It takes some time to get there and some time for an ACK to get back. How much time? 20 milliseconds. So we send 1000 bytes of data in one segment, and 20 milliseconds later we get the ACK back, and once we receive an ACK we're allowed to send more data. How much more? In additive increase, we increase by one MSS, one maximum segment size. So to make it clearer: initially the congestion window was 1000, allowing us to send 1000 bytes; after one round-trip time, having received an ACK, we increase it to 2000. The reception of an ACK indicates everything's okay with the network, no congestion, so let's increase to 2000. How many segments do we send? 
Two, and we send them one immediately after the other, and eventually we receive the ACKs. My diagram's not very good in terms of scale: the time to transmit those segments will usually be very small compared to the round-trip time. At what time do we receive the first ACK? At about time 40 milliseconds, because we started at time 20 and it takes a round-trip time of 20 to get the ACK back. What about the second ACK? This is where my diagram's not to scale, so don't be fooled. It's not 60; it depends on the transmission rate, but the time to transmit a segment is quite small. The round-trip time depends mainly on the propagation across the network and the queueing in the routers, while the transmission time is small, which means if we receive the first ACK around time 40, the second ACK comes shortly after, maybe at 41: there's a small time to transmit the first segment, the second follows immediately, and so its ACK comes back a little after the first. The exact time we can't calculate here, we'd need the details of the network, but the point is it's approximately the same time, a little after the first ACK. This assumes the main contributor to the round-trip time is the path delay, not the transmission time, which will be the case in most networks. Effectively, we send two segments and about 20 milliseconds later we get the two ACKs back. Of course one comes first, but approximately 20 milliseconds later we have both, so it's not exact, but thereabouts. 
Yes, let's put numbers to it. Say we start transmitting at time zero and it takes 0.1 milliseconds to transmit: we finish transmitting the first segment at 0.1, and because of the long delay there and back, the ACK arrives at time 20. We then transmit two segments, finishing the first at 20.1 and the second at 20.2, receive the first ACK at 40 and the next at 40.1, which is approximately 40. So yes, it's as if we transmitted them at the same time. To keep it simple, let's just say we receive the ACKs at about time 40, that is, after another round-trip time: first round-trip time, second round-trip time. Now we increase the window, up to what value? The aim is to increase by one MSS every round-trip time, so let's make it 3000, which allows us to send three segments. We send three segments, after approximately one round-trip time we receive three ACKs, and then we increase to 4000, 5000, 6000 and so on: a slow increase of the window every round-trip time. Any questions about the idea? So first the aim: increase by one MSS every round-trip time. That's a slow increase; that's the design of the algorithm. Every round-trip time we receive an ACK, or several ACKs, and we increase the window, allowing us to send more. In the first round-trip time, that first period of 20 milliseconds, we delivered 1000 bytes to the destination. What throughput is that? 1000 bytes is 8000 bits, and 8000 bits in 20 milliseconds is 400,000 bits per second, that is, 400 kilobits per second. So we can calculate it quite simply as congestion window divided by round-trip time, and in the second round-trip time it would be 2000 bytes divided by 20 milliseconds. 
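The throughput arithmetic above, window divided by round-trip time with the allowed window being the minimum of CWND and AWND, can be sketched in a few lines. `sending_rate_bps` is an illustrative name, and the 65,535-byte advertised window is just an assumed "very big" value so that CWND is the minimum:

```python
def sending_rate_bps(awnd_bytes, cwnd_bytes, rtt_ms):
    # The source may send min(AWND, CWND) bytes per round-trip time,
    # so the rate in bits per second is window * 8 / (rtt_ms / 1000).
    window = min(awnd_bytes, cwnd_bytes)
    return window * 8 * 1000 // rtt_ms

# With a 1000-byte congestion window and a 20 ms RTT, as in the example:
rate = sending_rate_bps(65_535, 1000, 20)   # 400,000 b/s, i.e. 400 kb/s
```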
So an increased sending rate, and in the third round-trip time, 3000 divided by 20. That gives 400 kilobits per second, then 800 kilobits per second, then 1200 kilobits per second, and so on: we increase by 400 kilobits per second every round-trip time. We'll see an example with numbers later. So remember, an increase in the congestion window is an increase in the sending rate, and that's what we're trying to control. If you plot this over time, you see that every round-trip time the window increases by 1000 bytes, or equivalently the sending rate increases by 400 kilobits per second: a linear increase in the sending rate. This phase of additive increase is also sometimes called congestion avoidance. Now, we said the goal was to increase by one MSS every round-trip time. When we implement this, that's not easy to measure: the source would need to record the round-trip time and then, say, every 20 milliseconds, increase its congestion window, and the round-trip time may vary. One way to implement, or almost achieve, the goal of one MSS per round-trip time is to increment the congestion window every time an ACK is received, which is easier from the sender's perspective: whenever you receive an ACK, take the old congestion window value and add MSS squared divided by the old value to get the new value. Let's see how that works in our example. Whenever we receive an ACK, we modify the congestion window using this equation. Initially the window is 1000 and MSS is 1000. When we receive the first ACK, what is the new value of the congestion window? Try to calculate it using this equation. The initial value is 1000; what's the new value? 
We're applying the equation used in practice to implement the goal of increasing by one MSS every round-trip time, and we'll see through this example how it approximates that goal. We send one data segment; the congestion window is initially 1000. When we receive the ACK we increase the congestion window: we take the old value, 1000, plus MSS times MSS divided by the old value, and we get 2000. So the initial value was 1000, and after the first ACK the new value is 2000. We'd like these values to match the ideal ones we calculated before; we're approximating them. We receive the next ACK; what do we get? Same equation: the old value, now 2000, plus MSS squared, which is always 1,000,000 in our example, divided by the old value, 2000. That's 500, so the new value after receiving this ACK is 2500. But almost immediately we receive another ACK, so we calculate again: 2500 plus 1,000,000 divided by 2500, which is 400, giving 2900. That's the value after the second ACK of this round trip. Is it the same as our desired 3000? No, but it's very close. The idea was to increase by 1000 after each round-trip time, and the way TCP implements it, instead of having a timer and counting round-trip times, is simply to apply this update whenever an ACK is received. The update approximates one MSS per round-trip time: we desired 3000 and got 2900, and if you follow along you'll see it never gets exactly the desired value, but it's always approximate. Yes, it's slightly smaller each time, but it's just an aim, and it approximates it roughly. 
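The per-ACK update just described can be checked numerically. This is a sketch of the additive-increase rule from the lecture, reproducing the 1000 → 2000 → 2500 → 2900 sequence; `ai_update` is an illustrative name:

```python
MSS = 1000.0   # maximum segment size in bytes, as in the example

def ai_update(cwnd):
    # Per-ACK additive increase: cwnd_new = cwnd_old + MSS^2 / cwnd_old.
    # Summed over the ~cwnd/MSS ACKs of one round trip, this adds
    # roughly one MSS to the window per round-trip time.
    return cwnd + MSS * MSS / cwnd

cwnd = 1000.0
cwnd = ai_update(cwnd)   # the single ACK of the first round trip -> 2000.0
cwnd = ai_update(cwnd)   # first ACK of the second round trip     -> 2500.0
cwnd = ai_update(cwnd)   # second ACK of the second round trip    -> 2900.0
```

Note the final value, 2900, rather than the ideal 3000: the implementation only approximates the one-MSS-per-RTT aim.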
And it's much easier to implement, because the source doesn't need an extra timer: just receive an ACK, change the value, rather than recording the round-trip time and having a timer expire to tell you when to increase it. So the goal is still a slow increase of the congestion window; we're still in the case where there's no congestion. Okay, now what do we do if congestion is detected? We want to decrease the sending rate, that is, decrease the congestion window, and the general approach is: if a loss is detected, halve the window. Cut it in half. Of course, don't go too low; normally the floor is one MSS, so once you're down to one MSS you no longer halve. So if the window is 10,000 bytes and one packet loss is detected, the window drops to 5000 bytes. This is the multiplicative decrease, a fast decrease. The increase is slow, a little every round-trip time, but when congestion occurs, the decrease is fast: we halve the window, and hence the sending rate, on every packet loss. Combined, we get additive increase, multiplicative decrease. The general idea is simple: whenever there's a packet loss, halve the window; we'll see some variations of that later. Over time during our TCP connection the congestion window, which you can equally read as the sending rate since they're proportional, looks something like this: we start low, additive increase grows the window, when there's a packet loss event we halve the value, and then we go back to additive increase. 
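The halving rule just stated is equally short in code. A sketch with the one-MSS floor mentioned above (`md_update` is an illustrative name):

```python
MSS = 1000   # maximum segment size in bytes, as in the example

def md_update(cwnd):
    # Multiplicative decrease: on a loss event, halve the congestion
    # window, but never let it fall below one MSS.
    return max(MSS, cwnd // 2)

md_update(10_000)   # -> 5000, the lecture's example
md_update(1_500)    # -> 1000, clamped at one MSS rather than 750
```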
Increasing, everything's okay, keep increasing; another packet loss, halve the value; and keep going for as long as our connection runs. That's the idea of these two mechanisms. You can see that this tries to send as fast as possible while also trying to avoid congestion: we'd like a high congestion window, that is, a high sending rate, because we want our data there as fast as possible, but not so high that it causes packet losses. So whenever we get a packet loss, reduce; then increase again. Additive increase, multiplicative decrease. Any questions before we move on to the next component of the algorithm? Before we move on, let's do a simple calculation, continuing with the same example. Say we're using this congestion control algorithm over a link, or a network, with a capacity of 100 megabits per second. TCP doesn't know that at the start, so what it does is start sending slowly, just one congestion window, and when it receives ACKs it increases its sending rate; this is additive increase, and it keeps increasing. The idea is to increase the sending rate so that we approach the capacity of our link or network. It's more complex when there are multiple senders and we don't know the capacity, but let's say we know the capacity is 100 megabits per second, so we should be able to send at 100 megabits per second. What is our sending rate after the first round-trip time? In the first round-trip time, going back to our diagram, we sent 1000 bytes, one segment, in 20 milliseconds. 1000 bytes in 20 milliseconds is 400 kilobits per second; that's our sending rate. We'd like to get the sending rate up to the capacity, assuming no one else is sending. 
In the second round-trip time, from our diagram, we sent two segments, so it's 2000 bytes divided by 20 milliseconds, which is 800 kilobits per second. In the third round-trip time we were allowed to send three segments, the congestion window was 3000, so we're up to a sending rate of 1200 kilobits per second. And we keep going: 4000 after another round-trip time, and so on. Every round-trip time we increase the window by 1000 bytes, which is equivalent to increasing the sending rate by 400 kilobits per second: 400, 800, 1200, 1600, after 5 round-trip times 2000, and we keep going. How many round-trip times does it take to get my sending rate up to the network capacity? Not 100, not 10. Every round-trip time we're increasing by 400 kilobits per second; that's additive increase, we just keep increasing by the same amount. So, correct: if our capacity is 100 megabits per second, that is, 100,000 kilobits per second, and we're increasing by 400 kilobits per second every round-trip time, then after 250 round-trip times we reach the capacity, in this example, with the numbers I've chosen: 250 times 400 kilobits per second takes us up to 100 megabits per second. So the point is, we start slow, at 400 kilobits per second, and TCP, if there's no congestion, slowly increases its sending rate, trying to approach the capacity. The capacity in this case is 100 megabits per second; if no one else is using the link, we should be able to send at 100 megabits per second. It took 250 round-trip times for my TCP source to reach its highest sending rate, and 250 round-trip times at 20 milliseconds each is five seconds. That is, you start your file transfer, you're sending that file slowly, then you're increasing the sending rate, and hence the throughput. 
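The five-second figure follows from simple arithmetic; here is a sketch under the lecture's numbers (100 Mb/s capacity, 1000-byte MSS, 20 ms RTT), with `ai_time_to_capacity` as an illustrative name:

```python
def ai_time_to_capacity(capacity_bps, mss_bytes, rtt_ms):
    # Each round trip, additive increase grows the window by one MSS,
    # so the sending rate grows by mss*8*1000/rtt_ms bits/s per RTT.
    step_bps = mss_bytes * 8 * 1000 // rtt_ms
    rtts = capacity_bps // step_bps
    return rtts, rtts * rtt_ms / 1000   # (round trips, seconds)

ai_time_to_capacity(100_000_000, 1000, 20)   # -> (250, 5.0)
```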
The throughput is increasing, but it's much lower than the capacity at the start, and it's only five seconds after you start that your sending rate and your throughput reach the capacity. So it takes a long time to reach the capacity in this case. That's a problem with TCP, or at least with additive increase: when you start your TCP connection, even if there's no congestion, additive increase is slow to reach the network capacity, so we're not very efficient. If I'm sending at 400 kilobits per second but my capacity is 100 megabits per second, that's very inefficient. That's a problem. We'd like to send as fast as possible; with additive increase we start slow and gradually increase, but it may take a long time to reach what we're allowed to send at, so we're inefficient for a long time. In this case we're not 100% efficient until five seconds after the start. And say you're browsing websites: a TCP connection may only last, and only transmit data, for 100 milliseconds, so you'd be using a very small sending rate the whole time you access that website. That's the problem with additive increase. How do we fix it? We increase the rate at which we increase our congestion window. Additive increase gives a slow, linear increase in the sending rate, and the problem is it takes a long time to reach capacity. The solution is that when we start our TCP connection, we again start slow, but we increase the sending rate rapidly while there's no congestion. The name of this mechanism is slow start: we still start sending slowly, but compared to additive increase we have a rapid, in fact exponential, increase in the sending rate. So at the start of a TCP connection, the sender sends at a slow rate; the congestion window starts at one maximum segment size. 
With the slow start phase we have a very fast, exponential increase in the congestion window, and hence in the sending rate. The aim is to double the congestion window every round-trip time: where additive increase adds one MSS every round-trip time, slow start doubles every round-trip time. The way to implement slow start is to increase the congestion window by one MSS for every ACK received. So the initial value is 1000; send one segment, receive one ACK, increase to 2000. Send two segments, receive two ACKs: when the first ACK arrives, increase to 3000; when the second arrives, increase to 4000. Let's show that with a quick calculation and compare with the values we got with additive increase. With additive increase we started at 1000, after one round-trip time increased to 2000, and after the next round-trip time to 3000. With slow start we send one segment at the start; after one round-trip time we double to two segments; we receive two ACKs back, and after the next round-trip time we double again, not from two to three, but from two to four segments, or 4000 bytes. We receive four ACKs back and then we're up to eight. So 1000, 2000, 4000, 8000, whereas with additive increase it was 1000, 2000, 3000, 4000: linear increase versus exponential increase. How does slow start do it? Increase by one MSS for every ACK received. Send one segment, receive one ACK, the window goes to 2000. Send two segments: the first ACK takes it to 3000, the second to 4000. Send four segments, currently at 4000: we receive four ACKs, increasing by one MSS each, so we increase by 4000, up to 8000. 
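The doubling behaviour falls straight out of the per-ACK rule; here is a small simulation sketch, assuming one ACK per segment as the lecture does (`slow_start_round` is an illustrative name):

```python
MSS = 1000   # maximum segment size in bytes, as in the example

def slow_start_round(cwnd):
    # One round trip of slow start: we send cwnd/MSS segments, and the
    # window grows by one MSS per returning ACK, so it doubles.
    acks = cwnd // MSS
    return cwnd + acks * MSS

windows = [1000]
for _ in range(3):
    windows.append(slow_start_round(windows[-1]))
# windows is now [1000, 2000, 4000, 8000], the doubling sequence,
# versus additive increase's [1000, 2000, 3000, 4000] over the same RTTs.
```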
So we're doubling every round trip time, approximately, whereas additive increase is slow: additive increase increases by one MSS every round trip time. Both of the mechanisms are used when there's no congestion, okay? When everything's okay, they allow us to increase our sending rate. The difference is that slow start allows us to increase our sending rate faster, a rapid increase, which means we can approach our capacity quicker. With additive increase in our example, it took five seconds, that is 250 round trip times, to reach our capacity. How long does it take with slow start? With additive increase, from the start of our connection, we increase by 1,000 bytes every round trip time, or our sending rate increases by 400 kilobits per second every round trip time. But if we use slow start, and don't be confused by the name, even though it's called slow start it's a rapid, fast increase, we effectively double every round trip time. How many round trip times before we reach the capacity of 100 megabits per second? Anyone can calculate? Yes. And then on the second ACK, 4,000. Now, there are two parts to both of these algorithms: there's the aim, and then the way that it's implemented. With additive increase, we said the aim is to increase by one MSS every round trip time. With slow start, we double the window every round trip time. That's the aim. With additive increase: 1,000, 2,000, 3,000, 4,000, 5,000, 6,000. In the same time with slow start, our aim is 1,000, 2,000, 4,000, 8,000, 16,000, a doubling every round trip time. That's the aim. But both of them have different ways in which to achieve that aim, the way that they're implemented, and both of them increase the window whenever they receive an ACK. The approximation in additive increase is this equation; it approximates this 1,000, 2,000, 3,000, 4,000 by increasing on every ACK according to this equation. 
Similarly, slow start has this implementation: whenever you receive an ACK, increase by one MSS. But remember the aims: slow increase versus rapid increase. Yes, the sending rate in the slow start phase is doubled every round trip time. So here, this was with additive increase, where it took 5 seconds. Let's do it for slow start. Same capacity, 100 megabits per second, same values as we had. We start the same as with additive increase: a window of 1,000 bytes, 1,000 bytes in 20 milliseconds, a sending rate of 400 kilobits per second. Second round trip time, what's the sending rate? We double, okay, 800 kilobits per second. Three round trip times: 1,600. We double, whereas over here we were just increasing by 400. Four: 3,200. Five: 6,400. Six: 12,800. Seven: 25,600. Eight: 51,200. Nine: about 100,000, our capacity. Remember our capacity was 100,000 kilobits per second. With additive increase it takes 250 round trip times to get to the capacity. With slow start it takes 9 round trip times, or 180 milliseconds. Much faster. So that's the advantage of slow start: we get to the capacity faster. Don't be confused by the name; yes, we start slow, but we rapidly increase. Like additive increase, this is only used when there's no congestion, when we get ACKs back. If there's a packet loss, then again we reduce the sending rate, and it gets a bit more complex. What we do is use slow start at the start of the connection, but then there's another parameter called the slow start threshold, often written as SSthresh. Once our congestion window reaches this special parameter, we revert back to additive increase. This is how they come together. Here's an example. I've got some values that I used to calculate this, but we see the plot of the window, again proportional to the sending rate. 
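The "9 round trip times" figure follows from repeated doubling: 400 kbit/s must double 8 times to pass 100,000 kbit/s. A quick sketch of that calculation, using the lecture's example numbers:

```python
RATE0 = 400e3        # initial sending rate: 1000 bytes / 20 ms = 400 kbit/s
CAPACITY = 100e6     # 100 Mbit/s link capacity
RTT_MS = 20          # round trip time in milliseconds

def rtts_to_capacity_slow_start(rate0=RATE0, capacity=CAPACITY):
    """Round trips until a rate that doubles every RTT reaches capacity."""
    rate, rtts = rate0, 1        # the first round trip uses the initial rate
    while rate < capacity:
        rate *= 2                # slow start: double every round trip
        rtts += 1
    return rtts

# 9 round trips of 20 ms each: 180 ms, versus 5 seconds for additive increase.
print(rtts_to_capacity_slow_start())            # 9
print(rtts_to_capacity_slow_start() * RTT_MS)   # 180 ms
```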
The sending rate plot is the same shape. We start with 1,000 bytes here, and in this case we have a round trip time of 100 milliseconds, so every 100 milliseconds we increase. Initially we use slow start, so we double: 2,000, 4,000, 8,000, 16,000. But we don't continue to use slow start. There's this extra parameter, the slow start threshold, which says once our window reaches this value, revert back to additive increase. And remember additive increase goes from 16,000 up to 17,000, increasing by just one MSS: 18, 19, 20, 21, 22, 23 and so on. We'll explain the top line in a moment. So in fact we use both of them: slow start at the start of your connection, where we start slowly but rapidly increase until we get to some threshold, and then we revert to additive increase. In this example there are no packet losses, there's no congestion. So when you start downloading a file, the congestion window in basic TCP starts small, rapidly increases and then has a slow increase. What's the top line here? We said we'd explain it in a moment. In this case, 1,000 is what we're allowed to send. We send 1,000, we receive one ACK. When we receive an ACK, increase by 1,000; that's the algorithm. We receive one ACK, so we're up to 2,000, which allows us to send two segments. We send two segments, we receive two ACKs. We're at 2,000; receive the first ACK, we're up to 3,000; receive the second ACK, we're up to 4,000 now. So when we receive the second ACK, we've increased to 4,000, allowing us to send four segments. We send four segments; we're going to receive four ACKs. Every time we receive an ACK, go up by 1,000. So we're at 4,000, we receive four ACKs, we go up to 8,000. Yes, but I'm not drawing all the values here, sorry: 1,000 up to 2,000; we receive two ACKs, so it goes 2,000, receive an ACK, 3,000, but immediately receive the next ACK, so up to 4,000. 
So the values I'm showing on this picture are approximately the congestion window every round trip time: in the first round trip time we use 1,000, in the second 2,000, in the third 4,000, and then 8,000. I say approximately because it depends on the sending rate; it's not exact. The idea is: every round trip time, double. And we see it on this diagram as well: doubling here, and then the slow increase, the additive increase. And what's the top line? The line we're drawing is the window size. Remember, the window is in fact the minimum of the congestion window and the advertised window. In this example, I set the advertised window to 24,000 bytes. So once we reach 24,000 bytes, the sending rate is limited by the advertised window. The amount that the source is allowed to send is the minimum of the congestion window and the advertised window. Advertised is 24,000; congestion is 1,000, so 1,000 is the limit. 2,000, 4, 8, 16: the congestion window is the limit, it's the minimum. 22, 23. Once we reach 24, even though the congestion window keeps going up, we're in fact limited by the advertised window now. So in practice both of the windows control the sending rate, the smaller of the two, because the sending rate is proportional to the window. In simple terms, it's just the window size divided by the round trip time, and we've assumed the round trip time is constant in this case. So it's the same shape with a different scale here: window size, sending rate. In the last five minutes, let's just introduce this last concept, which is where we reduce our sending rate in the presence of congestion. Up until now, we've been increasing the sending rate. But what happens when there's a packet loss? Well, we want to reduce the sending rate. And there are two types of packet loss, remember. 
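The "minimum of the two windows" rule can be written in a couple of lines. This sketch uses the example's values (24,000-byte advertised window, 100 ms round trip time); the function names are just for illustration.

```python
ADVERTISED = 24000   # bytes: the receiver's advertised window in the example
RTT = 0.100          # seconds: 100 ms round trip time in the example

def effective_window(cwnd, advertised=ADVERTISED):
    """The sender is limited by the smaller of the two windows."""
    return min(cwnd, advertised)

def sending_rate_bps(cwnd, advertised=ADVERTISED, rtt=RTT):
    """Rough sending rate: effective window (in bits) per round trip time."""
    return effective_window(cwnd, advertised) * 8 / rtt

# While the congestion window is below 24,000, it is the limit; once it
# grows past 24,000, the advertised window caps the rate instead.
print(effective_window(16000))   # 16000 (congestion window is the minimum)
print(effective_window(30000))   # 24000 (advertised window is the minimum)
```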
Packet loss detected by three duplicate ACKs, or packet loss detected by a timeout. And we said packet loss detected by a timeout is worse; it's an indicator of more congestion. Therefore we'll slow down more in that case. When we have a loss detected by three duplicate ACKs, we'll slow down a little bit, and with a loss detected by a timeout we'll slow down much more, because we think there's more congestion in the network. So there are different parts of the algorithm. For a loss detected by three duplicate ACKs, we have this slow start threshold, the parameter which tells us when to move from slow start into additive increase. We halve that value; in this example, if it was 16,000, it would drop down to 8,000. We set the congestion window to the new value of the slow start threshold, and then we enter additive increase and continue going up slowly. If there's a timeout, we halve the threshold, we set the window down to the minimum value, one MSS, the initial value that we started our connection with, and we enter slow start. That's best shown in the diagram here. We start our data transfer at the start of a connection with one MSS; the slow start phase says we can double every round trip time. Then we reach our slow start threshold, this green dashed line here, which means we're now in additive increase. Then we have a packet loss, an indicator of congestion. In this example it's a packet loss detected by three duplicate ACKs. We halve the threshold, drop the window down to the new value, which is 8,000, and then additive increase, keep increasing. Then there's another packet loss, in this case due to a timeout, an indicator of more congestion. So we halve the threshold again, from 8,000 down to 4,000, and we drop our window right down to the start, down to the initial value of one MSS, and we're in slow start. 
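The two loss reactions can be sketched as two small functions. One assumption to flag: the lecture says "halve the threshold", and in the classic algorithm the new threshold is taken as half the current congestion window, which is what this sketch does; the function names are illustrative, not real API names.

```python
MSS = 1000  # bytes: the initial congestion window in the example

def on_triple_dup_ack(cwnd):
    """Mild congestion: threshold becomes half the current window, the
    window drops to that threshold, and we continue in additive increase."""
    ssthresh = cwnd // 2
    cwnd = ssthresh
    return cwnd, ssthresh, "additive-increase"

def on_timeout(cwnd):
    """Severe congestion: threshold becomes half the current window, the
    window drops all the way to one MSS, and we re-enter slow start."""
    ssthresh = cwnd // 2
    cwnd = MSS
    return cwnd, ssthresh, "slow-start"

# The lecture's example: three duplicate ACKs at a window of 16,000 drop
# the window to 8,000; a later timeout at 8,000 drops it back to one MSS.
print(on_triple_dup_ack(16000))   # (8000, 8000, 'additive-increase')
print(on_timeout(8000))           # (1000, 4000, 'slow-start')
```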
We reach the threshold, and then additive increase, and we keep going until there's another packet loss. The threshold is halved on every loss, so you see with several packet losses it can go down quite low. And that's why, in wireless LANs for example, when you're transferring a large file with some packet losses over that wireless link, your throughput can drop, because TCP is dropping its sending rate. So this is a combination of all those mechanisms in play. We have slow start and additive increase, and a slow start threshold which tells us when to switch from slow start to additive increase. Those are the ways that we increase our sending rate. And when there's a packet loss, we reduce the sending rate. If it's three duplicate ACKs, halve the threshold and start from there in additive increase. But if it's a worse indicator of congestion, that is a timeout, halve the threshold and revert right back to the slowest sending rate, and start from there. Hopefully we build back up until there's less congestion, and if there's another packet loss, we follow that mechanism again. And that's the main mechanism of TCP congestion control, at least in its very basic form. There are some more complex variations of the algorithm, but that's the basics of TCP congestion control. And if, over a long time period, you're transferring a large file, you see the rate going up when there's no congestion, dropping, then going back up again, trying to maintain a level that avoids too much congestion in the network. Try to understand those different mechanisms: the idea of additive increase, slow start, and how we respond differently to packet loss detected by a timeout versus three duplicate ACKs. Do you need to remember the algorithms? If you can remember the ideas and be able to compare them, that's a very good start. Any questions? We've still got a few more slides, but that basically finishes TCP. 
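All the pieces together produce the sawtooth pattern described above. This is a toy simulation, not real TCP: it works in whole round trips, the loss schedule is supplied by hand, and the parameter names and starting threshold of 16,000 are just the lecture example's values.

```python
MSS = 1000  # bytes

def simulate(rounds, losses):
    """Simulate the congestion window per round trip.

    losses maps a round number to 'dupack' or 'timeout'. Returns the
    window at the start of each round trip."""
    cwnd, ssthresh = MSS, 16000
    history = []
    for r in range(rounds):
        history.append(cwnd)
        event = losses.get(r)
        if event == "dupack":
            ssthresh = cwnd // 2             # mild congestion: halve,
            cwnd = ssthresh                  # restart additive increase here
        elif event == "timeout":
            ssthresh = cwnd // 2             # severe congestion: halve,
            cwnd = MSS                       # drop to one MSS, slow start
        elif cwnd < ssthresh:
            cwnd = min(2 * cwnd, ssthresh)   # slow start: double per RTT
        else:
            cwnd += MSS                      # additive increase: +1 MSS per RTT
    return history

# No losses: exponential growth up to the threshold, then linear growth.
print(simulate(5, {}))                # [1000, 2000, 4000, 8000, 16000]
# A timeout in round 5 sends the window back to one MSS and slow start.
print(simulate(8, {5: "timeout"}))
```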
What we'll do next week is just recap on that. Summarize. Too late, we've finished, you've missed all the fun parts. We will summarize and move on to our next topic. So any questions to finish on TCP?