So we've gone through TCP flow control; that was last week's lecture, and we're not going to go through it in any more detail. We mentioned earlier that TCP has a reliability mechanism: we send data, we expect to receive an ACK back, and if we don't, we retransmit. There are some details about when to retransmit: basic retransmit and fast retransmit. So really the three main features of TCP are the reliability mechanism (how and when to retransmit), flow control (making sure we do not overflow the receiving computer), and congestion control, which is what we're going to cover now. Congestion control is about making sure we do not overflow the network. TCP is an end-to-end protocol between one host and another. Flow control makes sure the source host doesn't overflow the destination host, but between the hosts is a path of routers, and those routers have limited capabilities. Congestion control makes sure that when the sender sends its TCP segments, we do not overflow the capabilities of the routers along the path, inside the network; we avoid congestion in the internet.

We're going to go through how TCP congestion control works, first by looking at the causes of congestion and then at how TCP responds. Congestion is something you experience every day, especially driving. If you've driven to this campus recently, as you arrive to turn left into the industrial park on Tivon Road, there has been, in the last week or so, a lot of traffic piled up: a traffic jam at the intersection. Why is there a traffic jam there now, compared to maybe a month ago? There's construction on one part of the road, which decreases the number of lanes.
Normally there were three lanes available: one lane turns left towards us and two lanes go straight ahead. Now it's cut down to two lanes because of the road works, and the result is that cars line up for a long time at that intersection; we have a long delay to wait. This is an example of congestion in a road network. The same concept applies in the internet, except of course we don't have roads; we have paths between a source host and a destination host, and the routers are something like the intersections.

The reason for the congestion at the intersection out front is that coming into it (if you go back towards St. Carlos, to the intersection by the bridge) there are three or four lanes of traffic, and then it cuts down to just two lanes. So we have three or four lanes of cars coming in, and they all need to pass through an intersection that has just two lanes. For this intersection, the rate at which we can send cars out is only two lanes' worth, but coming in there are three, in some cases four, lanes of cars, and of course you cannot get any more out of that intersection than two lanes' worth. If the rate at which cars come in is greater than the rate at which we can send them out, cars start piling up and the queue at the intersection gets longer. And when the queue gets longer, your car must wait a long time before you can turn or keep going, so your delay goes up.

How would you avoid this congestion? Go a different route. Don't go through that intersection: we know there's going to be congestion there, so take a different path to avoid it.
Well, it's not so easy to avoid here, because there's only one way into Bunker E. But in general, if you take a different route through the road network you can possibly avoid that congestion. Suppose we cannot take a different route; what can be done? How could we reduce the congestion at that intersection? Reduce the rate at which cars come into it. The congestion arises because only two lanes can go out; we cannot change that, it's fixed. If the number of cars coming in over some period is larger than two lanes' worth, cars start queuing up. So the only way to deal with it is to somehow get the cars to slow down coming into this intersection, perhaps at a previous intersection, so that the rate at which cars arrive is no more than two lanes' worth. That effectively moves the problem back to the previous intersection: if we can tell the previous intersection not to let cars into this portion of the road faster than we can handle, they will not pile up at our intersection, and that can continue on up the chain.

The same thing happens in the internet. Routers must forward data along a path, and congestion may occur at those routers. Data comes into a router, and the router can only send data out at a certain speed. If the rate at which data comes in exceeds the rate at which it can be sent out, there will be congestion at that router. The data will queue up inside the router: the router has some buffer space, and if packets arrive too fast they are stored in the queue. The queue builds up, increasing the delay of those packets. How do we avoid that? One way is to send the packets on a different path.
If we cannot do that, then we try to tell whoever is sending the packets to slow down, in the same way that, to avoid congestion at our intersection, we somehow get the previous intersection to slow the traffic coming into our portion of the road: tell the one before us to slow down its sending rate.

Now let's return to TCP. In our picture of the internet, the routers are the circles and the hosts are the source and destination; the source is sending data to the destination. If we're getting congestion at this router here, with packets coming in too fast, then, assuming we cannot take a different path, one approach to reducing that congestion is to tell the previous hop to slow down: don't send so much, because the amount coming into this router exceeds the rate at which it can send out. Eventually that message reaches whoever is sending the data, which is the source TCP. So in the internet, congestion control relies on telling the source to slow down so that it stops overflowing the routers inside the network. If congestion is experienced in the internet, TCP has a mechanism by which the source can reduce its sending rate; as a result, the amount going into the router decreases, and eventually there is no longer congestion.

That's the idea of congestion control in summary. Congestion occurs when the amount coming into the routers exceeds the rate at which they can send out. The queues build up; as a queue builds up, the delay goes up, which is bad for some applications. And if the queue becomes full of packets and more packets arrive, packets will be dropped.
That's bad for applications, and especially for TCP. So to avoid congestion, or when it does occur, we tell the source to slow down. We need to explain how TCP detects congestion, how it slows down, and how that impacts performance. Let's go into a bit more detail.

What causes congestion in a network, in particular in the internet? Congestion occurs when the number of packets being transmitted through a network approaches the packet-handling capacity of that network. What does that mean? Here we're talking about packets of data rather than cars. When the amount coming into the network approaches the number of packets that network can carry, its capacity, we get congestion. The capacity, or packet-handling capacity, is the rate at which the network, or a path through it, can deliver packets.

For a particular network, what is that capacity? It turns out to be very hard to determine; it depends on many factors. The capacity of a single link is easy to determine: it's usually a characteristic of the technology. The capacity of the link between two computers may be 100 megabits per second, say. But when we have a network with many links, the capacity of that network is much harder to determine, and it may even vary over time. So we say congestion occurs when the amount transmitted through the network approaches the capacity, even though it's sometimes difficult to know that capacity in advance.

One thing we'd like to know is this: if the capacity is at some level and the sending rate starts low, and we then increase the sending rate so that it approaches the capacity, what happens? What are the negative impacts?
Well, we may guess that our packets will be delayed at routers, and that we may get packet drops. By how much is an interesting question, which we will see when we look at TCP. What we want to do is avoid congestion. So congestion control aims to keep the rate of packets coming in below the level at which performance, in terms of delay and throughput, starts to drop off as we approach the capacity. If this is the capacity of our network, we would like to keep the sending rate a little bit lower than that, so that performance is still good; if we increase the sending rate much further, performance may drop. What exactly that level is, and how much lower than the capacity the sending rate should be, is again difficult to determine, but that's the concept.

In the internet, congestion is caused by too many sources trying to send data at too high a rate. Coming back to my simplified network: routers normally have more than two links; some routers will have many links to other routers. Concentrate on this router, with multiple links to other parts of the network. There may be hosts up here, down here, everywhere, all sending data. Look just at this output link from this router. It has some capacity; let's give it a number, say 10 megabits per second. That is, router R can send out at 10 megabits per second. And let's say there are many hosts sending towards this router.
So there's data coming in from these other directions. If the sources connected to these routers send such that the combined rate coming into this router approaches the capacity of the output link, that 10 megabits per second, then we'll start to get congestion. Too many sources trying to send data at too high a rate: many sources here, all sending in this direction, such that what comes into this router approaches the output capacity. Packets will start to be queued at the router; they'll be delayed and possibly even dropped if the queue limit is reached.

So too many hosts sending at too high a rate causes congestion, and the result is that packets get dropped at routers. That's a bad thing, because when a packet is dropped, TCP will eventually retransmit. If the data packet is dropped and doesn't reach the destination, the destination will not send an ACK, so the source will eventually retransmit. And that can be a problem: we sent a data packet, it was dropped because of congestion, and our response is to send another data packet. If everyone keeps retransmitting, that also causes congestion, because every retransmission is more data being sent.
If we lose packets and TCP then retransmits them, we add to the congestion at the router. More retransmissions cause more congestion; more congestion causes more packet losses, because the queues fill up at the routers; and more packet losses lead to more retransmissions. If this keeps going, eventually everything being sent is just a retransmission of some original data, nothing is delivered, and the internet fails. Without some form of congestion control, with TCP's current mechanisms everyone would end up sending too much data, nothing would be delivered to the destinations, and the internet would not work as it does today. So a key part of how the internet works and performs is how TCP performs its congestion control: when we recognise congestion somewhere in the network, we get the source to slow down its sending rate, and if we slow down, we reduce the congestion until it disappears. That's TCP's aim: reduce the rate at which the source sends, to avoid or control congestion.

Let's look at an example. You don't have this slide, but I think you'll see the point quickly. Here's a router with one output link and two input links coming from hosts A and B. A and B want to send data packets in this direction, at some sending rate, and we'll vary that rate and see the impact. Let's put some numbers to this. Say the output rate of the link is 100 kilobits per second, and for simplicity say a packet is 1500 bytes long, ignoring headers and so on. The output rate is given in kilobits per second; what's the rate in packets per second?
How many packets per second can my router send? Let's go through the calculation. We need to convert to bits: 1500 bytes is 12,000 bits, so one packet is 12,000 bits. We can send 100,000 bits per second, and one packet is 12,000 bits, so we can send approximately eight packets per second, sometimes written as pps. Packets per second is just another unit: if we know the packet size, we can express a rate in packets per second as well as in bits per second. In practice packet sizes vary, so it's not so common, but some devices will report rates in packets per second. Eight, because eight times 12,000 is 96,000 bits, about 100,000; strictly it's eight point something, but let's keep it simple. I converted to packets per second because sometimes it's easier to think in packets, and in these diagrams I'll draw one packet at a time rather than worrying about rates in bits or bytes. So if we have a link rate, we can convert it to a packet rate: convert the rate to bits per second and divide by the packet size in bits.

My router can send eight packets every second. As it receives a packet from either A or B, it may hold it briefly in a queue in memory. If the router is not already sending a packet, it sends the arriving packet out immediately; if it is currently transmitting, the arriving packet is queued for a short time and then sent. It sends one packet at a time, at a rate of eight per second.

What I've drawn in this picture is an example of A and B sending packets over time. They arrive at the router R, and R sends them out: the third line shows R transmitting the packets, and the top two lines show A and B sending packets in. You'll see a "collision".
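The conversion we just did by hand can be written as a one-line helper (a sketch using the lecture's numbers):

```python
def link_rate_pps(link_rate_bps: float, packet_size_bytes: int) -> float:
    """Convert a link rate in bits per second to packets per second."""
    packet_size_bits = packet_size_bytes * 8
    return link_rate_bps / packet_size_bits

# The lecture's example: a 100 kbit/s link carrying 1500-byte packets.
rate = link_rate_pps(100_000, 1500)
print(round(rate, 2))  # 8.33, the "eight point something" packets per second
```

The same function works for any link: a 10 Mbit/s link with the same packet size gives about 833 packets per second.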
Let's get to that. When does A send a packet? Say A is a computer you're sitting at, using a web browser, and B is a friend's computer. When they send packets depends on the user and what's happening at each computer, so let's say it's random: they send packets at random intervals within some range, so the interval between packets varies. In this example, assume A sends at an average rate of one packet per second, and so does B. Average means it's not necessarily sending one packet exactly every second, but if you measured over, say, 100 seconds, there'd be about 100 packets sent; the intervals between them may vary.

The router can send out eight packets per second. I've randomly chosen some time slots for when A sends; in the diagram, one second is divided into eight intervals. Here A transmits a packet; it arrives at the router, and the router transmits it out. Then nothing is transmitted, because A is not sending anything and B is not sending anything, so the router sends nothing, until B sends a packet and the router forwards it. Some time later B sends another packet, the router delivers it, and so on. And, as you noted, at one point A and B send a packet to the router at the same time. What happens? Two packets arrive at the router simultaneously. The router can only send one at a time, so it chooses one, say at random; the other waits in a queue and is transmitted after the first. That's what I've tried to show here: two packets arrived at the router, and the blue one was sent first.
The green one was put in a queue and then sent in the next time slot. So there's a small queuing delay incurred for at least one of the packets when both hosts send at the same time. It's not a collision like we think of in a wireless LAN, because there are two physical interfaces here: the router can receive on both cables at the same time, put both packets in memory, immediately transmit one, and transmit the other after it. We're focusing on wired links; you can do the same with wireless provided there are separate interfaces, so there are no collisions on the radio medium.

Let's change the scenario and increase the rate at which A and B send to two packets per second; in the first simple example they were sending at one packet per second. Again I've randomly chosen some time slots: A sends a packet, then a bit later sends another; B sends too; and you can just follow it through. The router receives from A and sends immediately; everything's okay. Then we again get two packets arriving at the router at the same time, so one has to be queued and then sent, a small queuing delay. And here it's a little worse: two packets arrive, the blue one is sent, the green one is queued. The queued packet is then sent, but at the same time another packet arrives, so that must be queued as well and then sent. So two packets waited in the queue, though not at the same time.

Increasing the sending rate, what was the impact on performance? What changed between the first case, where they were sending at one packet per second,
and the second, where they're sending at two packets per second? Are any packets being dropped? Not really; they're being queued, but nothing is discarded by the router. The difference is a slight increase in the average delay. If you look at the delay of every packet, in the first case just one packet had extra delay in the queue; in the second case, with the increased sending rate, three packets in this simple example had increased delay. So a small increase in delay. The router is coping, because it has an output capacity of eight packets per second and we're only utilising about half of that: about four packets per second coming in.

Now here's an example where A is sending at an average rate of four packets per second and B is also sending at four packets per second, while the capacity the router can send at is eight packets per second. Let's see what happens. At first we're lucky: they send at different times, so the router can transmit immediately. But then A and B send at the same time and one packet has to be queued; that's the queue shown down here. When the queued green packet is transmitted, another two arrive, so they also have to be put in the queue and sent one at a time in the following slots, and in the meantime more arrive. We'd have to keep going, but we can see that many more packets must be queued than in the previous case. So in these simple examples, as we increase the input sending rate, the queuing delay at the router increases, and it gets more complex still when you have multiple inputs. What's the output rate in this case? How fast is the router sending?
Approximately eight packets per second: the router is fully utilising its output link, which supports a capacity of eight packets per second. Coming in is a total of eight packets per second, and, apart from a gap here, we send out pretty much eight packets per second. So that's good; we're utilising 100% of the capacity. The bad thing is that the delay is going up. Whereas in the previous case, with a smaller input rate, our efficiency was low (we were only sending about four packets per second) but the delay was also quite low, which is an advantage. Increasing the sending rate increases the delay.

Now suppose A and B each send at eight packets per second. This is simplified, but the router is still sending out at eight packets per second while many packets are queued: as one is sent out, another two come in, and the queue would become infinite in size. (Why did I alternate A and B in the diagram? Basically, I couldn't be bothered to keep drawing it.)

So as we increase the sending rate, the queue gets larger. In practice a router has a limited, fixed queue size. Say we can store only four packets and no more; here's our queue size limit. Once there are four packets in the queue and another packet arrives, we'll have to drop a packet; there will never be more than four packets in the queue. With an infinite queue, as in the previous case, the queue would just get larger and larger, but in practice the queue size is limited by the amount of memory we have, and packets will be dropped. The empty boxes indicate packets that are dropped, and as packets are dropped, the TCP source will eventually have to retransmit them.
The Rs indicate packets being retransmitted: they're not original packets but retransmissions of the dropped originals. Dropped packets lead to retransmissions, and to fewer original packets being delivered. This is trying to demonstrate that we need to keep the sending rate below some level, such that the queue stays small enough not to cause a large delay or packet drops.

When A and B send to the router at the same time, two packets arrive together and sit in memory, and the router can transmit only one at a time. Which one does it transmit? It's up to the router. In my example I chose the blue one first: blue and green arrive, the blue one is transmitted, the green one goes in the queue. In the next time instant the green one is transmitted, while another two arrive and are put in the queue. It's first in, first out: the packet at the head of the queue is transmitted, and newly arrived packets go to the end. That's the normal approach. If two arrive at exactly the same time, it's up to the router which goes first; it wouldn't affect performance in this case.

So that's congestion, or rather its impact: increased queuing delay, hence increased delay for our applications, and increased packet loss. That's bad for performance. Somehow we need to tell A and B to slow down when we detect this happening, and that's what TCP congestion control does.

The next few slides demonstrate the same thing; I won't go through them in any detail. They put what I explained in those pictures into a different example with different notation, with two inputs A and B feeding a router, and they analyse the resulting performance.
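The slotted examples above can be mimicked with a small simulation. This is a toy sketch: the two senders, the one-packet-per-slot service, and the queue limit of four come from the lecture's diagrams, but the Bernoulli arrival model and all parameter names are my own assumptions.

```python
import random

def simulate(arrival_prob, slots=10_000, queue_limit=4, seed=1):
    """Toy slotted model of the lecture's router.

    Each slot the router transmits one queued packet; hosts A and B each
    independently generate a packet with probability `arrival_prob`.
    Returns (delivered fraction, dropped fraction, mean queue length).
    """
    random.seed(seed)
    queue = 0
    delivered = dropped = sent = queue_total = 0
    for _ in range(slots):
        # Arrivals from A and B this slot.
        for _ in range(2):
            if random.random() < arrival_prob:
                sent += 1
                if queue < queue_limit:
                    queue += 1
                else:
                    dropped += 1   # buffer full: packet is lost
        # Router serves one packet per slot.
        if queue:
            queue -= 1
            delivered += 1
        queue_total += queue
    return delivered / sent, dropped / sent, queue_total / slots

# With 8 slots per second, 1 pkt/s per host is arrival_prob = 1/8 per slot.
light = simulate(arrival_prob=1/8)   # well below the 8 pkt/s capacity
heavy = simulate(arrival_prob=1.0)   # each host at 8 pkt/s: twice the capacity
```

Running this shows the lecture's point numerically: at light load the queue is almost always empty and nothing is dropped, while at twice the capacity roughly half of all packets are discarded and the queue sits at its limit.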
They consider various scenarios; I won't go through them. The analysis can get quite complex with more complex networks; what we just covered is one router, two inputs, one output. Even with a simple network you can start to analyse what the performance will be, but we're not trying to do that analysis here. Queuing theory is an entire topic; you could take a whole course to study it.

Where does this get us? What generally happens is this. Take the simplest view of this diagram: the horizontal axis is the sending rate, the amount coming in, and the vertical axis is the throughput, the amount coming out. The network has some capacity, not shown on the diagram. As we increase the sending rate, the throughput rises in proportion: increase the input and the output increases as well. But then we approach a point nearing the capacity of our link, and because we have packet drops and retransmissions, the throughput levels off near the capacity. In fact, if you keep increasing the sending rate, you get more and more retransmissions and the throughput drops off. If you just keep increasing the sending rate without congestion control, eventually the throughput will drop towards zero, because everything being sent is a retransmission.
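The throughput-versus-load curve can be sketched with a toy model. The rising region and the knee at capacity match the lecture's description; the exact shape of the decay past the knee is purely an illustrative assumption, not derived from any real TCP analysis.

```python
def goodput(load, capacity=10.0):
    """Toy goodput (Mbit/s of original data delivered) vs offered load.

    Below capacity, everything offered gets through. Past capacity, drops
    trigger retransmissions that consume a growing share of the link, so
    goodput falls; the 1/overload**2 decay is an illustrative choice.
    """
    if load <= capacity:
        return load
    overload = load / capacity
    return capacity / overload**2

for load in (2, 5, 10, 15, 30):
    print(load, round(goodput(load), 2))
# Goodput rises with load, peaks at the 10 Mbit/s capacity, then collapses.
```

The point of the sketch is only the shape: proportional growth on the left, and a fall towards zero on the right, which is the congestion collapse described next.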
No original data gets delivered. Having a throughput of zero when you have a capacity of ten megabits per second is very bad. So the idea of congestion control is to keep the sending rate somewhere to the left of this point. If the sending rate is past this point, the throughput goes down; below it, the throughput is proportional to the input sending rate. We need to manage the sending rate so that the throughput doesn't approach the capacity and then drop right off. This is sometimes called congestion collapse: the network collapses. That's just the concept; we won't go through the theory behind it.

Let's summarise what we know. I don't think this slide captures it well. The main point is that as we increase our sending rate, queuing delay goes up and potentially packet drops go up, and without congestion control, packet drops mean retransmissions, which is more data being sent and less original data delivered, so the throughput will eventually go down. We need to keep the sending rate below the level that causes too many packet drops and too many retransmissions. That's the main point; or, in this diagram, keep the sending rate below the level at which the throughput drops off.

A question was asked about the number of outgoing links here. Yes, but normally you can treat them independently: when I look at this router, it's connected to many other routers.
So there's traffic coming in from all directions: imagine traffic coming in this direction going out here, and coming in here going out there; it depends on who is sending to whom. You can normally analyse one output direction at a time: look at all the sources sending on that output link, analyse from that perspective, and you get results similar to what we showed in the example. Of course, there may be no congestion on this output link but congestion on that one; it does depend on who is sending what. If we now think from TCP's perspective, from one source to one destination, what we care about is this path only. We want to get our data across this path; we don't care about this router sending out here and out here, but we do care about the router sending across this link. The speed at which we can do that is affected by everything being sent into the router, so we do need to consider the whole network, but for analysis we can normally consider one output link at a time. Does that answer your question? Anything else? Any questions? (If you arrived late, maybe there was congestion at the traffic intersection.)

So we need a way to control congestion. How? There are two basic approaches. The first is end-to-end congestion control: the end systems, the source and destination, are involved. For example, the destination tells the source to slow down. If congestion is detected somewhere on the path between source and destination, then somehow the destination informs the source that it should slow down to avoid congestion; we'll see that's what TCP does. The alternative is that the routers themselves play a role in congestion control: as a router detects congestion, it tells the previous router to slow down, and so on. The routers detect congestion and inform the previous hops in the path that there's congestion, so slow down, and you get a sort of back pressure, because if I tell the
previous one to slow down, eventually it will tell the one before it to slow down, and eventually the source slows down. So network-assisted congestion control involves the routers detecting congestion and telling others to slow down. We're going to focus on end-to-end congestion control, which is what TCP uses in the internet. Network-assisted congestion control is only used in very specific networks, not in the general internet; for example, in a single network owned and operated by one organisation.

Let's look at TCP. We have a TCP source running on this computer and a destination on that computer, and the source is sending data to the destination across some path. If there's congestion on that path, we need to somehow tell the sender to slow down, to limit its sending rate. So the sender limits the rate at which it sends based on the perceived network congestion: if it thinks there's congestion in the network, it slows down, and we'll also see that if it thinks there's no congestion, or that congestion is easing, it actually speeds up its sending rate. It adjusts its sending rate depending on what it perceives to be happening in the network: a lot of congestion, slow down; no congestion, increase the rate; congestion occurs again, slow down again. You get feedback and adjust the sending rate accordingly. That's the idea.

For that to work, we need an algorithm that says how the TCP sender should adjust its sending rate. How should it slow down? When? By what amount? If it's sending at 100 kilobits per second and congestion is detected, should it drop to 50 kilobits per second, 25, or 99? By how much do you slow down when there's congestion?
We'll have to cover that. The other problem is: how do we know that there is congestion in the network? It's a perception. The source doesn't know the set of routers between it and the destination. If my computer is the source and the Facebook web server is the destination, what are the routers between my computer and Facebook? I don't know. I could run some tests to try and find out, but it may be different today than tomorrow. So how do I know if there's congestion in the internet, or at least in my path between source and destination? We need some way to measure that, and we need to know how to respond.

So there are algorithms for responding. Maybe I've conflated two things: how the sender limits its sending rate is not the same as how it responds. The congestion control algorithm is how to respond if congestion is detected. But how does the sender limit its sending rate in the first place?

Well, in TCP, what do we use to control the sending rate? It's slightly different here, as we'll see, but in flow control, how was the sending rate controlled? Remember, when we looked at TCP flow control, what limited or controlled the sending rate? Some window. What was it called? AWND, the advertised window. If you remember, back in TCP flow control we had this advertised window, AWND. That was a value set by the destination that tells the source how many bytes it can send per round-trip time. So the sending rate for flow control was controlled by this parameter, the advertised window. We did some calculations and saw that the throughput, or sending rate, was the advertised window divided by the round-trip time. We cannot control the RTT, but by controlling the advertised window we control how fast the source sends.

We need a similar mechanism in congestion control, and we'll introduce a congestion window, CWND, a separate variable. So TCP has both flow control and congestion control.
We treat them separately, but they are interrelated. From flow control, the number of bytes the sender can send is limited by the advertised window. In fact, there's also this other window, the congestion window, and the number of bytes you're allowed to send is limited by the minimum of the two windows. With flow control we have the advertised window; we'll see that with congestion control we have the congestion window.

So the TCP source now has two parameters. The advertised window is set by the TCP destination: in flow control, when the destination sends an ACK back, the window field in the TCP header (it's just called "window" in that field) carries the value for AWND. The destination advertises how many bytes you're allowed to send according to its buffer size. That's flow control. Now there's another parameter, the congestion window, and we'll see there's an algorithm for determining what value it should take. The amount the TCP source is allowed to send, the number of outstanding bytes before it has to wait for an ACK, is the minimum of the advertised window and the congestion window.

We have two values now, but we're going to focus just on the congestion window; the flow control algorithm sets the advertised window. Let's assume the advertised window is very big. Say the buffer at the receiver is 10 megabytes and it's always available. That means the advertised window would be 10 megabytes. Let's assume that's always large compared to the congestion window, that is, the congestion window is always smaller than 10 megabytes, which means the minimum of the two is always the congestion window.

Let me make some more space and write this clearly. For flow control, we have an advertised window that controls how many bytes I can send before waiting for an ACK. With congestion control, we have a congestion window which also controls how many bytes we're allowed to send. In fact, we're allowed to send the minimum of those two: the minimum of the advertised window and the congestion window is how many bytes we're allowed to send before waiting for an ACK. So the sending rate is approximately the minimum of those two divided by the round-trip time. That's the same form we used when we analyzed flow control, where the sending rate was the advertised window divided by the round-trip time. Now it's a bit more complex: it's the minimum of the advertised window and the congestion window, whichever is smaller, divided by the round-trip time.

As the slide says, if the buffer at the receiver is very large, the advertised window will be very large, say always larger than the congestion window. As a result, looking just at congestion control, the sending rate will be the congestion window divided by the round-trip time.

We had a question: how does the TCP sender limit its sending rate? The answer: adjust the value of the congestion window. If we want to send more, increase the congestion window; if we want to send less, because there is congestion, decrease the congestion window. If AWND is 10 megabytes and CWND is 1 kilobyte, 1,000 bytes, the minimum is 1,000 bytes, and the sending rate is 1,000 bytes divided by the round-trip time. If we're allowed to send faster, increasing CWND to 2,000 bytes, for example, will increase the sending rate.

So that's the first point in TCP congestion control: the source controls how fast it sends by adjusting CWND, the congestion window. That was the answer to the first question. Later, after the second question, we'll come to see how we adjust it, what value we set it to. But first: how do we know there's congestion in the network?
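To make the relationship concrete, here is a small sketch in Python. The variable names follow the lecture (AWND, CWND, RTT); the numeric values are purely illustrative:

```python
def sending_rate(awnd_bytes, cwnd_bytes, rtt_seconds):
    """Approximate TCP sending rate: the effective window is the
    minimum of the advertised window (flow control) and the
    congestion window (congestion control), sent once per RTT."""
    effective_window = min(awnd_bytes, cwnd_bytes)
    return effective_window / rtt_seconds  # bytes per second

# AWND is large (10 MB receiver buffer), so CWND is the limiting factor.
awnd = 10_000_000
rtt = 0.1  # 100 ms round-trip time

print(sending_rate(awnd, 1_000, rtt))  # 10000.0 bytes/s
print(sending_rate(awnd, 2_000, rtt))  # 20000.0 bytes/s: doubling CWND doubles the rate
```

This is why adjusting CWND is enough to control the rate: with a large receiver buffer, the minimum is always CWND, so the rate tracks CWND directly.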
How do we know there's a router dropping packets, or a queue building up? How do we perceive network congestion? Well, TCP makes an assumption: if I send data and I don't receive an ACK in time, or more generally if I detect a packet loss, then I assume there's a packet loss caused by congestion. The TCP sender assumes that packet loss indicates increased congestion. That makes sense in simple cases, because, as we said, when congestion goes up there's more chance of packet loss. We saw it on those diagrams: when the sending rate is small, the queue size is small and no packets are lost; increase the sending rate a little and there's more congestion, but still no packet loss; at a high sending rate there's a lot of congestion and many more packets are dropped. So the idea is that if packets are dropped, that indicates congestion in the network. Whenever the TCP sender detects a packet being lost, it assumes congestion is increasing and it responds accordingly.

Now we come back to our understanding of the TCP retransmission scheme. There are two ways we detect a packet loss: basic retransmit and fast retransmit. With basic retransmit, there's a timeout: I send my data, I'm waiting for an ACK, I'm still waiting, and no ACK arrives within some time period. I time out, and I assume that packet was lost. That's one indicator of packet loss: you don't receive an ACK within the timeout period. But we also saw the fast retransmit scheme: I send some data, I receive an ACK with one ACK number, I receive a second ACK with the same ACK number, and in fact I end up receiving three duplicate ACKs. In that case I also assume my packet was lost. So there are two indicators of packet loss, and we'll use them to determine how to respond.

Okay, quick test: what is the sender going to do with CWND, the congestion window, when there's a timeout? So I'm the sender.
I've sent some data, I'm waiting for an ACK, and there's a timeout. What do I do with CWND? Here's the hint: the answer is either increase it, keep it the same, or decrease it.

There's a timeout. Okay, what's our goal? I've just sent some data, and the timeout tells me that a packet was lost. A packet being lost tells me there's been some increase in congestion, and when there's an increase in congestion I need to slow down. When I detect that there's a lot of congestion, to reduce the congestion I need to slow down my sending rate. So the answer is: we decrease the congestion window. When there's a packet loss, I need to slow down to avoid more congestion, and to slow down, to reduce the sending rate, I reduce the congestion window. Say the round-trip time is fixed: the sending rate is the minimum of the advertised window and the congestion window, divided by the round-trip time. If there's a packet loss, there's more congestion; when there's more congestion, I want to send less; to reduce my sending rate, I reduce the congestion window. So there's the first response: if there's a timeout, reduce the congestion window.

What if there are three duplicate ACKs? What will the sender do? Was that a guess? Again, if there are three duplicate ACKs, the sender will do what? What do three duplicate ACKs mean? I don't know if we have the slide, but that was the mechanism for fast retransmit: I sent many packets, I received an ACK with one ACK number, and then I received the same ACK number three more times, three duplicates, and then I retransmit. The reason I retransmit is that it's an indicator of packet loss. So if I receive three duplicate ACKs, that's also an indicator of packet loss, and I also decrease the congestion window. Compared to the timeout, how much do I decrease by? Decrease is the answer for both of them.
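The two loss indicators just described can be sketched as a bit of sender-side state. This is a deliberate simplification: it assumes we only track the last ACK number and a duplicate counter, ignoring everything else a real TCP sender does:

```python
class LossDetector:
    """Simplified sketch of how a TCP sender infers packet loss:
    either a retransmission timeout fires, or three duplicate
    ACKs arrive (the fast retransmit trigger)."""

    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_number):
        """Return 'dup_acks' when the third duplicate ACK arrives."""
        if ack_number == self.last_ack:
            self.dup_count += 1
            if self.dup_count == 3:
                return "dup_acks"  # fast retransmit: assume loss
        else:
            self.last_ack = ack_number  # new ACK number, reset the counter
            self.dup_count = 0
        return None

    def on_timeout(self):
        """No ACK arrived within the timeout period: assume loss."""
        return "timeout"

d = LossDetector()
for ack in [100, 200, 200, 200, 200]:
    event = d.on_ack(ack)
print(event)  # "dup_acks": ACK 200 was repeated three times after the original
```

Both return values indicate a loss; as discussed next, the sender treats them differently.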
In both cases, we decrease the congestion window. Now let's compare them. Note this down: a timeout means there's a loss, which means we decrease the window, CWND; similarly, three duplicate ACKs also mean a loss, and also mean a decrease of the congestion window. There are two events that indicate packet loss to the source, and in both cases we decrease the congestion window. We do some other things as well, in fact.

In which of those two cases do I decrease the congestion window by more? Which one causes a larger decrease? The hint is that they decrease it in different ways; one of them decreases more than the other. Which one? Timeout. Why?

A timeout means data loss for sure, but so do three duplicate ACKs. You're on the right track, though. The idea is that in most normal scenarios, the three-duplicate-ACKs event is what will happen most of the time. That is, if I send a lot of data and some of it is lost, it's likely that I will still receive three duplicate ACKs. But if, for example, multiple packets are lost, I may not receive three duplicate ACKs, and then I get a timeout. Like you said, if there is more data loss, I may not get the third duplicate ACK before the timer expires. When there's more loss, it's possible that the third duplicate ACK never comes, but eventually a timeout will occur.

What that indicates is this: the three-duplicate-ACKs event is an indicator of a small amount of loss, while a timeout is an indicator of a large amount of loss. With a small amount of loss there's a small increase in congestion, and therefore we make a small decrease in the congestion window. When a timeout occurs, that indicates a lot of congestion; maybe many packets were lost, and more packet losses mean more congestion.
That's bad, and the way to respond to a lot of congestion is to slow down a lot, that is, to decrease the congestion window by a large amount. So in fact we respond differently depending on which of these two events occurs. That's the idea.

All of this relies on the assumption that a packet loss indicates congestion in the internet, or at least on the path. That's not always true. In many networks it is, but think of your wireless LAN. With a wireless LAN, I'm sending from my laptop to the access point and maybe on to some server. What causes packet losses there? You did some experiments in the last assignment; I don't know if you tested packet losses, but you may have noticed variations in throughput, and maybe packet loss, in iPerf. In a wireless LAN, what might cause a packet loss? Interference, so a poor signal, and collisions with other people transmitting. If everyone's out in the cafeteria doing their assignment at the same time, there may be collisions and packet loss across the wireless link. That's not caused by congestion at a router; it's caused by the characteristics of the wireless link, so it's not an indicator of congestion.

So a packet loss in some cases does not mean there's congestion in the network. Normally, in the internet, in most cases a packet loss does indicate congestion, but not in all cases: in some networks a packet loss is an indicator of something else happening, something bad, but not congestion. In those cases TCP may not work well, because TCP assumes that if there is a packet loss there is congestion, and it slows down. I don't know if you've noticed, but in some cases when you download large files across a wireless link with a poor signal and many packets lost, your throughput, your download rate, can drop very low. You get bad performance, and that's because of the way TCP responds.

So there are three things we need to cover with TCP congestion control. First, how do we limit the sending rate?
Answer: change the congestion window. A higher value means a higher sending rate; a lower value means a lower sending rate. Second, how do we know there's congestion? Packet loss, and in fact two types of loss indication: loss detected by a timeout, and loss detected by three duplicate ACKs. Third, what do we do when there's a packet loss; how do we change the window? That's what we'll cover tomorrow: the algorithm for how we increase and decrease the window upon packet loss and other events. We'll start that tomorrow.

Let's stop there. If you have questions, we can discuss them. Tomorrow we'll cover the algorithm for increasing and decreasing the window and finish with the performance of TCP.
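As a preview of the kind of adjustment that algorithm makes, here is a hedged sketch in Python. It assumes Reno-style responses (halve the window on three duplicate ACKs, fall back to one segment on a timeout); the lecture's actual algorithm is covered in the next session, so treat this only as an illustration of the small-decrease versus large-decrease distinction:

```python
MSS = 1_000  # maximum segment size in bytes (illustrative value)

def on_loss_event(cwnd, event):
    """Sketch of the differential response to the two loss indicators,
    assuming Reno-style behavior. Three duplicate ACKs signal mild
    congestion: halve the window. A timeout signals heavy congestion:
    drop back to a single segment."""
    if event == "dup_acks":
        return max(cwnd // 2, MSS)   # small decrease
    elif event == "timeout":
        return MSS                   # large decrease
    return cwnd                      # no loss: leave the window alone

print(on_loss_event(16_000, "dup_acks"))  # 8000
print(on_loss_event(16_000, "timeout"))   # 1000
```

The point is only the asymmetry: the event that signals more loss triggers the larger reduction in CWND, and therefore the larger drop in sending rate.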