In the previous lecture we got a brief introduction to TCP, and I think everyone has seen some parts of TCP before. Today we're going to go through flow control, and after that congestion control. We'll go through the two features separately, but in fact they are related and may impact upon each other. Flow control, which we touched on at the end of the last lecture, addresses the situation where a source computer is sending data to a destination computer. The destination has a certain amount of buffer, memory inside that computer allocated to receiving packets. As the destination receives TCP segments it puts them into the buffer, and then the CPU goes to work, does some processing, and eventually delivers the data to the application. That takes some time. The potential problem is this: if the source is sending so fast that the receiver, which is putting segments into the buffer, is slow to process them, then this limited amount of memory eventually fills up and the receiver has to drop segments. That is bad for performance, because dropping segments between source and destination means retransmissions are needed, which means longer delay and lower throughput. The idea of flow control in TCP is to avoid that situation, and the way to avoid it is quite simple: the receiver tells the sender how much space it has available in its buffer. If the receiver has 1,000 bytes available in its buffer, it somehow has to tell the sender, "I've got 1,000 bytes available", and the sender will not send more than 1,000 bytes.
And if at a later stage more space becomes available at the receiver, because data that was already received has been processed, and now there are 10,000 bytes available, then the receiver needs to tell the sender, "I've now got 10,000 bytes of space available", and the sender can send up to 10,000 bytes of data. So the receiver controls the flow of data from the sender. We don't have much on flow control in this set of lecture slides; there's another handout that we'll go through, which explains flow control with a detailed example. But before we explain how it works, let's remind ourselves that every TCP segment carries the source and destination ports, a sequence number, and some other fields. Importantly for flow control, there is a window field. It is 16 bits, two bytes, long. The value in it is used by the receiver to tell the sender how much space it has available in its buffer. So the source sends data to the destination; the destination knows how much space it has in its receive buffer, let's say 1,000 bytes; and when that node sends an acknowledgement back, it sets the window field in the TCP header to indicate "I have 1,000 bytes available in my buffer". That way the source knows how much space is available and how much it is allowed to send. So the window field in the segment is used to report back to the source how much space the receiver has in its buffer. We're going to go through an example with some pictures from a different handout, the one on TCP and the bandwidth-delay product (BDP). You have this handout in front of you in your lecture notes; we'll go through some pictures from it to explain how flow control works.
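One detail worth pausing on: because the window field is 16 bits wide, the largest value a receiver can put in it is 65,535 (there is a window-scaling option to go beyond this, but that's outside today's scope). A minimal sketch of the value a receiver would place in the field; the function name here is my own, not from any real TCP stack:

```python
# Sketch of the value a receiver places in the 16-bit window field.
# Illustrative only; a real TCP implementation is far more involved.

WINDOW_FIELD_MAX = 2**16 - 1  # 65,535: largest value a 16-bit field can hold

def window_field_value(free_buffer_bytes: int) -> int:
    """Advertise the free receive-buffer space, capped by the field width."""
    return min(free_buffer_bytes, WINDOW_FIELD_MAX)

print(window_field_value(1_000))    # 1000  -> "you may send me 1,000 more bytes"
print(window_field_value(10_000))   # 10000
print(window_field_value(100_000))  # 65535 -> the field can't express more
```

So whatever the buffer size, the advertised value is simply the free space, subject to what the field can represent.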
Here's one of the pictures, a diagram to illustrate the very basic flow control used in TCP. We have a client and a server, and in this case the client wants to send data to the server: source and destination, sender and receiver, client and server, connected across some internet, so there may be multiple links between client and server. In a stop-and-wait protocol we send one data segment and then must stop and wait for an acknowledgement to come back before we can send another. TCP instead uses what's called a sliding window mechanism, which allows the source, the client, to send multiple data segments; once it reaches a limit it must wait for ACKs to come back, and then it can send more data segments. The diagram shows that general mechanism. In this simple case the client sends four data segments. They propagate across the network: the client transmits a data segment and it takes time to reach the server. When the server receives it, it sends back a short acknowledgement segment, and while waiting for that ACK the client can send a second data segment, then a third and a fourth. So in this example the client was allowed to send four data segments; after sending four, it's not allowed to send any more until it receives an ACK for the first one it sent. The ACK indicates that the server has successfully received the data and is ready to receive more, so the client sends more, then waits for ACKs again, and this can continue. That's a general illustration of the sliding window mechanism: the client can send a window of segments, more than just one. How many can it send? That is exactly what TCP flow control determines.
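The sender-side bookkeeping for this sliding window can be sketched in a few lines. This is a toy model with names of my own choosing, no real network I/O, just to show the send-until-the-limit-then-wait pattern:

```python
class SlidingWindowSender:
    """Toy sender: may have at most `window` unacknowledged bytes in flight."""

    def __init__(self, window: int):
        self.window = window
        self.in_flight = 0  # bytes sent but not yet ACKed

    def can_send(self, nbytes: int) -> bool:
        return self.in_flight + nbytes <= self.window

    def send(self, nbytes: int) -> None:
        assert self.can_send(nbytes), "window exhausted: must wait for an ACK"
        self.in_flight += nbytes

    def ack(self, nbytes: int) -> None:
        """An ACK for `nbytes` slides the window forward: that data is delivered."""
        self.in_flight -= nbytes

s = SlidingWindowSender(window=4_000)
for _ in range(4):            # send four 1,000-byte segments back to back
    s.send(1_000)
print(s.can_send(1_000))      # False -> limit reached, sender must wait
s.ack(1_000)                  # ACK for the first segment arrives
print(s.can_send(1_000))      # True  -> allowed to send one more segment
```

How large the window may be, and what limits it, is what flow control determines.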
How many segments, or more precisely how many bytes, can the client send before having to wait for the next ACK? It depends upon the receive buffer space at the server. The server tells the client how many bytes it's allowed to send by including that number in the window field of the ACK, and once we've sent that many, we're not allowed to send any more. For example, suppose we're allowed to send four segments. We send the first, so we're allowed three more; we send the second, so two more, then one more. After we send the fourth segment we're not allowed to send any more until the server says we can. The server sends an ACK back saying, in effect, "now you're allowed to send one more segment": inside the ACK there is a value indicating how much more may be sent. We send one more segment, and just after sending it we receive a second ACK saying we can send one more; we transmit that, receive the third ACK for the third segment sent, send another, and so on. That's hopefully a reminder of the basics of a sliding window mechanism. Any questions or issues? I know most of you covered sliding windows in my previous course; here we want to see how TCP implements this and how it impacts performance. So: the amount the client is allowed to send before it receives an ACK is determined by the buffer space at the server. The server has an amount of buffer allocated for the TCP connection; that is, the operating system allocates some memory for the data it receives. As the server receives data segments it puts them in the buffer, where they may stay until the application at the server takes the data out and processes it. Flow control is about making sure that we don't overflow this buffer.
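To see why this matters, here is a small simulation of what happens with no flow control at all. The numbers are entirely hypothetical: a sender delivering 4 segments per time step into a receiver whose application only drains 2 segments per step.

```python
# Hypothetical simulation: fast sender, slow receiver, no flow control.

def simulate(ticks: int, buffer_limit: int, arrive_per_tick: int,
             drain_per_tick: int) -> int:
    """Return how many segments the receiver had to drop."""
    buffered = 0
    dropped = 0
    for _ in range(ticks):
        for _ in range(arrive_per_tick):       # segments arriving this tick
            if buffered < buffer_limit:
                buffered += 1
            else:
                dropped += 1                   # buffer full: segment is lost
        buffered = max(0, buffered - drain_per_tick)  # application consumes
    return dropped

# 10 ticks, buffer of 8 segments, 4 arrive but only 2 are drained per tick:
print(simulate(10, 8, 4, 2))   # 14 -> drops pile up once the buffer fills
# If arrivals match the drain rate, nothing is ever dropped:
print(simulate(10, 8, 2, 2))   # 0
```

Every one of those dropped segments would have to be retransmitted, which is exactly the delay and throughput penalty flow control is designed to avoid.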
So what we do is this: when the server sends back an ACK, the window field in the TCP header indicates how much space is available in the buffer. For example, if we start with a buffer of 10,000 bytes and 3,000 bytes of data are currently stored in it, then when the server sends back an ACK the window field will equal 7,000, saying: "I've got 7,000 bytes of space available in my buffer; you're allowed to send me up to 7,000 bytes of more data. If you send me more than 7,000 you'll overflow my buffer, and that's a problem." So the window field restricts how much the source is allowed to send, and the client uses that value, together with how much it has already sent, to determine how many more data segments it can send. Note that in TCP we count bytes, not data segments: the indication "you may send 7,000 bytes" holds whether you send that in one segment, two segments, or ten; it doesn't matter as long as the total is no more than 7,000 bytes. Those are the basics of TCP flow control, and of course these values change over time. Suppose that 3,000 bytes is still in the buffer and we send another 1,000 bytes, so the amount of buffer in use increases to 4,000. The next ACK that comes back would have a window of what? 6,000. It was 3,000 in use with 7,000 spare; after receiving another 1,000 bytes of data we have 4,000 bytes in use and 6,000 bytes spare, so the server sends back an ACK saying "my window size is 6,000 bytes", meaning the client is not allowed to send more than 6,000 bytes. And how does space in the buffer become free again?
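The window arithmetic we've just traced can be put in a few lines. Again this is a toy model, with class and method names of my own choosing, reproducing the numbers from the board:

```python
class ReceiveBuffer:
    """Toy receive buffer; `window` is what goes in the ACK's window field."""

    def __init__(self, size: int):
        self.size = size   # total buffer, e.g. 10,000 bytes
        self.used = 0      # bytes waiting for the application

    def receive(self, nbytes: int) -> None:
        assert nbytes <= self.window, "sender exceeded the advertised window"
        self.used += nbytes

    def app_read(self, nbytes: int) -> None:
        self.used -= nbytes        # application consumed data: space freed

    @property
    def window(self) -> int:
        return self.size - self.used

buf = ReceiveBuffer(10_000)
buf.receive(3_000); print(buf.window)   # 7000 -> ACK advertises 7,000
buf.receive(1_000); print(buf.window)   # 6000 -> next ACK advertises 6,000
buf.app_read(3_000); print(buf.window)  # 9000 -> after the app reads 3,000
```

Receiving data shrinks the advertised window; the application reading data grows it again.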
Well, this is the buffer that the operating system maintains for TCP's received data, and there's an application on the server that consumes that data. We send data, TCP stores it in the buffer, and it stays there until the application takes it out of the buffer and uses it. Depending on what the application is doing and the speed of the computer, it eventually takes data out of the buffer, making more space available. For example, suppose the application takes 3,000 bytes out of the buffer; that data has been successfully delivered, so now TCP has 1,000 bytes stored in the buffer, and the next ACK that goes back will advertise 9,000: the server tells the client, "I now have space for another 9,000 bytes; send me up to 9,000." TCP allocates a buffer for each connection; we'll see some typical values shortly. When an application creates a TCP connection, the operating system allocates, let's say, 60,000 bytes of buffer for the receiver. If you have multiple connections running, each has its own separate, independent buffer. So if the server is, say, a web server and hundreds of people are connecting to it at the same time, there are hundreds of TCP instances running in parallel, all independent of each other and all with their own buffer: it's per connection. Any questions on how the basics of TCP flow control work? Now, it's hard to predict when this value goes up and down, because when does the application take data out of the buffer?
It depends upon the application, and it depends upon the speed of the computer and how long processing takes, so we do not know in practice how fast that happens. But the basic procedure in TCP is this: as we receive data we put it in the buffer, and the window size that goes back gets smaller, because we've got less space in the buffer; as data is taken out of the buffer by the application, we can advertise a larger window, because we've got more space. What's happening is that the server is advertising the amount of space available in its buffer, its memory, and hence this is called the advertised window: the server advertises, it informs the client how much space it has available, and that limits how much the client is allowed to send. The next diagram is just a simplification of the previous one: I've removed the arrows for all of the segments except the first, and I've simplified the ACKs; usually the acknowledgements are small and take a small amount of time. Now that we know the basics of TCP flow control, we're interested in understanding how it impacts performance. When I'm transferring files or a large amount of data from my client to a server, how fast can I send that? The TCP flow control mechanism impacts the speed, because how much data can the client send?
It depends upon the advertised window. The advertised window, say a value of 7,000, limits the amount of data the client can send before it has to wait for another ACK: it can send 7,000 bytes, and after that it must wait until it receives more information from the server. If we have to wait a long time, our throughput goes down, because we send 7,000 bytes and then wait a long time. To analyse how this impacts our throughput, let's look at it in a general sense and then go through a detailed example. In a simple case the performance primarily depends upon how fast we can send data and the time to get an acknowledgement back. In this diagram, although we don't have numbers, let's say the client is allowed to send this amount of data; the sloped portion indicates the transmission time of four segments. If each segment is 1,000 bytes, that's 4,000 bytes it's allowed to send. So initially it sends its 4,000 bytes, and it will not be able to send any more until it receives some acknowledgement from the server saying the advertised window has increased. Let's put some numbers to this; you can draw on that diagram of the client and the server. The advertised window needs some initial value; let's say it's initially 4,000 to get started, which means the client is allowed to send 4,000 bytes. To keep things simple, let's say it sends 1,000 bytes per segment, so in this diagram you see four segments. It transmits the first segment, which takes time to get across to the server. How much more is it allowed to send?
It's allowed to send 3,000 more: the limit was 4,000 and we've just sent 1,000, so we're allowed another 3,000. The client transmits those as three more segments, each 1,000 bytes in length, out onto the network. After sending those 4,000 bytes we're not allowed to send any more, because the window is now down to 0: we could initially send 4,000, we've sent 4,000, so we're allowed to send 0 more bytes. The segments take some time to get to the server. Following the diagram, when the first one arrives at the server, let's assume the server processes it immediately and sends back an ACK. What happens at the buffer? It's hard to draw, but initially the 1,000 bytes go into the buffer, and assuming the application takes that 1,000 out immediately, we have an extra 1,000 bytes of space available, so the server sends back an ACK saying "you can send 1,000 more bytes". The same happens for each of the segments: an ACK comes back for each. From the client's perspective, we've reached our limit and must wait until an ACK says we're allowed to send more. In general each ACK allows us to send more data, and we keep going until we get the diagram on the screen: we send four segments, wait, then send four more. In this example the window is fixed at 4,000 bytes: we send 4,000 bytes, wait for the acknowledgement of the first of those segments, which allows us to send another 1,000 bytes; we send that, then receive the next ACK allowing another 1,000, and the next, so effectively we send four more segments of 1,000 bytes, then wait, then send four more. If we've got a continuous amount of data, that pattern continues.

So what's our throughput? How would you measure the throughput in this case, and what does it depend upon? Look at how often data is getting to the server: we're sending four segments every round trip time, and because nothing is lost, if we send four segments the server receives four segments. The client sends four segments, waits one round trip time for the data to get there and the ACK to come back, then sends another four. A round trip time is the time to get one segment to the server plus the time for the ACK to come back. If we keep drawing this diagram it stays the same: four segments every round trip time, and since nothing is lost that is also the rate at which we receive. That gives us our throughput: it depends upon how many segments we can deliver per round trip time. In this case the window had a maximum size of 4,000 bytes, a value I just made up for this example. If the window were larger, say 5,000, we could send a fifth segment and have a smaller amount of time waiting; with 6,000 an even smaller amount; 7,000, 8,000, 9,000, smaller again. At some point we're always transmitting and never have to wait for an ACK, because we're still sending part of the window when the first ACK arrives. Here's an example with a window of 8 segments, 8,000 bytes: we send the first and are allowed another seven, second, third, fourth, fifth, sixth, seventh, eighth; but while we're sending the seventh, we receive the ACK for the first, allowing one more, so we can send the eighth plus another. And when the ACK for the second segment comes back, it allows us to send yet another segment. If we keep going we can always send segments; there is no waiting time, because the window size is large enough relative to the round trip time.

Let's make note of some things we know. The advertised window is the limit on how much the source is allowed to send, and the maximum advertised window is the size of the buffer at the receiver: in the example on the board I said the buffer was 10,000 bytes, so the maximum advertised window is 10,000 bytes in that case. The advertised window is sometimes abbreviated AWND; we will see later that there are other types of window, such as the congestion control window. We also know the round trip time, RTT: the time to get a segment to the destination and an ACK back. Of these two diagrams, which one is better in terms of throughput, the one with four segments or the one with eight? Hands up for four; hands up for eight. With four segments, I send four and then for some time I'm not sending anything: I've got a network and I'm not using it to transmit data, which is inefficient. Four segments per round trip time is worse than eight, or slightly less than eight, per round trip time; in that case I spend no time waiting, all the time transmitting, which is more efficient.

Let's try to derive an equation that relates these factors together, with some notation we're going to use. The round trip time runs from when we start transmitting the first data segment until we get an ACK back, as shown on the diagram: here is one round trip time. The advertised window, AWND, is the amount we're allowed to send per round trip time; in my example I said it was 4,000 bytes. The advertised window is measured in bytes or bits, and the round trip time in seconds, a unit of time. We also have a rate, the speed at which we can deliver data across the network; if we simply have a single link, it's the transmission rate of that link, in bits or bytes per second with some prefix. For example, if I have a 100 megabit per second link between my two computers, the rate is 100 megabits per second. And the advertised window depends upon the buffer size at the server, the receiver; its maximum value here may be 10,000 bytes. A measure of performance, then, is how much time we spend transmitting data per round trip time.

Most of you said the four-segment case is worse than the eight-segment one; that is, sending 4 segments per round trip time is less efficient, gives a lower throughput, than a window of 8 segments, which allows a continuous stream of segments. The question that becomes important when we look at the performance of TCP is: how large should the window be to give optimal performance? In this case the window is 4 segments, or 4,000 bytes, whatever units you want to use. What if the window were just one segment, 1,000 bytes, under the same conditions: do you think the throughput or efficiency would be higher or lower than what's on the screen? Hands up for higher; what about lower? As you can see, just remove three of the frames: if we have only one, we send one, wait a long time, send another, wait a long time, send another. That is less efficient than sending four. So a window of 1,000 gives the lowest performance; 4,000 is better; 5,000 better again, because we spend less time not sending; and you can imagine 6,000, or the 8,000 in the diagram here, where we are sending all the time, no time waiting. In that case we have the best scenario: no time spent waiting, always transmitting, and ignoring other overheads we get 100% efficiency. What if we set the window to 10,000: will the performance be higher, lower, or the same as on the screen, which is for 8,000? We have three options now. Look at the trend we have: with the low values we get the lowest performance, just 1,000 is not good, 4,000 a bit better, 8,000 even better. But 10,000 compared to 8,000? It will be the same. In this specific example the round trip time is such that we receive the ACK for the first frame while we are still sending the first set of 8 segments, so with 8,000 we are already always transmitting. When I say the window is 1,000, 4,000, 8,000 or 10,000, you can equally say the buffer size at the server is 1,000, 4,000, 8,000 or 10,000, because this value comes from the size of the buffer. So another way to think of it: if the buffer at the server were 1,000 bytes, the source would only be allowed to send 1,000 bytes at a time, a window of 1,000; if it were 4,000, we would be allowed to send 4,000 bytes per round trip time; 8,000 is better than 4,000, as we can see on the two diagrams; and 10,000 is no better than 8,000, because with 8,000 we've already reached the optimum where we're always sending and cannot send any faster. It's about not spending any time waiting: 8,000, 10,000, 100,000 are all the same. Increasing above some point does not give us any better performance.

At what point do we get this optimal performance? That's what we want to know: what should the value be such that the sender is always sending? Let's find an equation for that. To summarise again: we have optimal performance when the sender is always sending, no time waiting, as in the example on the screen. Under what conditions is the sender always sending? Consider our three parameters: advertised window, round trip time, and rate. The advertised window divided by the rate is the time spent sending data; the round trip time is the total time. So we get optimal performance when the advertised window divided by the rate, that is, the time spent sending data, is greater than or equal to the round trip time: AWND / rate ≥ RTT. If the time spent sending is at least the round trip time, we spend no time waiting, all the time transmitting; if it is less than the round trip time, we will not get the optimal performance. That's an important part of understanding TCP performance. Any questions on that? So the goal is: we would like the advertised window divided by the rate at which we can send data to be greater than or equal to the round trip time of our network. If that's the case, using the flow control mechanism we'll always be sending data, and that gives us our optimal performance, if we ignore things like errors. But if the advertised window divided by the rate is less than the round trip time, as in this diagram where you can see this arrow is shorter than this one, then we spend some time waiting, which gives less than optimal, suboptimal, performance.

Now, when we are using TCP in the internet between our client and server, which of these three parameters can we control or change? Can I change the round trip time? Not very easily: the round trip time between my computer and the Facebook web server depends on many factors outside the control of my computer and the server, such as the physical distance, the links between us, and the routers between us. We cannot control the round trip time; it's a given, a characteristic of the network. It's even hard to predict, and it may vary: when I connect to the Facebook web server it may be 200 milliseconds, when I connect to the local SIT server it may be 1 millisecond, but it's not something we can control. What about the rate? The rate is the rate at which we can deliver data between client and server. As an example, connect a client and server directly via a 100 megabit per second Ethernet link (later I'll connect my two laptops via a LAN cable; 100 megabits per second is the characteristic of my LAN card, the speed at which it can send). Then we'd say the rate of this network is 100 megabits per second. But what about a more complex scenario? Instead of connecting two computers directly together, suppose I connect to a router via a 100 megabit per second LAN, and that router connects to the server via an old LAN cable that only supports 10 megabits per second. What is the rate at which I can transfer data from client to server? I can send to the router at 100 megabits per second, and the router can send to the server at 10 megabits per second; what's the average rate at which I can send data from client to server? Can anyone calculate the answer? 10. 10 megabits per second: we're limited by the lowest of the rates of the links in the path. Even though I can send at 100 to the router, the router can only deliver at 10 megabits per second to the server, so data will be queuing up at the router, being delayed, and then sent; therefore, on average, the rate from client to server is 10 megabits per second, the minimum of all the link rates in the path. We call this the bottleneck, the bottleneck link in our path. The path rate is effectively the minimum of the link rates in that path, the rate of the bottleneck link. A more complex example: a path with link rates of 5, 10, 2 and 7. What's the path rate? It's 2, the minimum of those four: we can send at 5 into the first link, but we're limited by the 2. And that minimum is what's used as the rate in our equation. Between my client here and the Facebook web server in the US, can I control the link rates? No, I have no control over what they are, so again the rate is a characteristic of the network: the path between client and server is what it is, and I cannot control the rate. What about the advertised window? That comes back to the buffer size at the server: the advertised window is how much space is in the buffer. If the buffer is 10,000 bytes, the advertised window can go up to 10,000 bytes and never larger; if the buffer at the server were 100,000 bytes, the advertised window could go up to 100,000 bytes. So the advertised window is bounded by the buffer size at the server, and that is something we can set. So let's come back to our equation. We said that to get optimal performance we need the advertised window divided by the rate to be greater than or equal to the round trip time. So if you're using TCP and you want optimal performance, set the advertised window such that, with the given rate and the given round trip time, this equation holds. For a particular network or path the round trip time may be known and the rate may be known; if I know the round trip time and the rate, I make sure the advertised window is a value such that the equation holds, or in other words, I make sure the buffer size at the server is large enough. Let's rearrange the equation, bringing rate to the other side: for optimal performance the advertised window needs to be greater than or equal to the round trip time multiplied by the rate, AWND ≥ rate × RTT. For example, suppose I measure the round trip time from my client to the server as 10 milliseconds, and the rate is 6 megabits per second. How big should my buffer at the server be if I want optimal performance, or put more simply, what does the advertised window need to be? Multiply the two values together: 6 megabits per second, 6 × 10^6 bits per second, multiplied by 10 milliseconds, 10 × 10^-3 seconds, gives 60,000 bits. So if my advertised window is greater than or equal to 60,000 bits we get optimal performance, optimal throughput, the case where we're always sending. An advertised window of 70,000 bits is good; 100,000 bits is good, the same performance; 50,000 bits is not good, or at least not optimal. What is the advertised window? Its maximum is the amount of space in the buffer, so another way to say it: if my buffer at the server is at least 60,000 bits I can achieve optimal throughput with TCP flow control, and if it is less than 60,000 bits I will not. So now we've come back to how big the buffer should be to get optimal throughput,
because the advertised window depends upon the buffer. Any questions before we illustrate this with an example? So we're working through this: given a particular path between client and server, say from my computer to the Facebook web server, suppose we know that the rate of that path is 6 megabits per second and that the round trip time, the time to get there and back, is 10 milliseconds; those are characteristics of the network, of the path we're using. Then for TCP flow control to ensure the source can always send data and spend no time waiting, the server must have buffer space for that TCP connection greater than or equal to 60,000 bits. If it's 60,000 bits, that's sufficient; if it's 70,000 bits, that's okay as well; but if it's 50,000 bits, we get a lower throughput, because with TCP flow control we'll spend some time sending and then have to wait for an ACK before we can send more, which gives us suboptimal throughput. What if the rate decreases? Well, this is about efficiency, not absolute throughput. Look at the ratio between the time spent sending, the advertised window divided by the rate, this vertical line, and the round trip time: that ratio is a measure of efficiency. What percentage of the time are we sending?
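This buffer-sizing rule and the notion of efficiency can be checked numerically. A sketch using the lecture's numbers (6 Mbit/s, 10 ms); the function names are my own:

```python
def bdp_bits(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the minimum advertised window, in bits,
    needed for the sender never to stall waiting for ACKs."""
    return rate_bps * rtt_s

def efficiency(awnd_bits: float, rate_bps: float, rtt_s: float) -> float:
    """Fraction of each round trip spent sending: (AWND/rate)/RTT, capped at 1."""
    return min(1.0, awnd_bits / bdp_bits(rate_bps, rtt_s))

RATE, RTT = 6e6, 10e-3                 # 6 Mbit/s path, 10 ms round trip
print(round(bdp_bits(RATE, RTT)))      # 60000 bits
print(efficiency(70_000, RATE, RTT))   # 1.0   -> optimal: always sending
print(efficiency(50_000, RATE, RTT))   # ~0.83 -> waits part of each round trip
print(efficiency(30_000, 3e6, RTT))    # ~1.0  -> slower 3 Mbit/s path, smaller BDP
```

Any window at or above the bandwidth-delay product gives the same, optimal result; below it, efficiency falls off in proportion.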
So yes, if the rate were smaller, say 3 megabits per second, we would only need a buffer size of 30,000 bits, a smaller buffer. We might get a lower absolute throughput, but what we're talking about is the percentage, the efficiency. You're right that if the rate is 6 megabits per second, the throughput, which is the rate at which we actually deliver data, can never be larger than the sending rate of 6 megabits per second. What we're saying is that if our path rate is 6 megabits per second, the round-trip time is 10 milliseconds, and the buffer size at the server is greater than or equal to 60,000 bits, then our throughput can reach 6 megabits per second. That is optimal performance: we can be sending all the time, with no time waiting. Now take a different example: a client and server with, as you said, a smaller rate of 3 megabits per second and the same round-trip time of 10 milliseconds. We can calculate that the advertised window, or the server buffer space required, would be 30,000 bits. So if our advertised window is greater than or equal to 30,000 bits, the throughput will be the optimal 3 megabits per second. Now think of efficiency, throughput divided by rate: efficiency in this case is 100%, and in the 6 megabit case also 100%, which is the highest it can go. Yes, the absolute value of the throughput is less, but in terms of what we have, 3 megabits per second, we want TCP to use all of it. If we know we can go no faster than 3 megabits per second, we want to use all of that, and we're trying to work out what value of advertised window, or of buffer space, is needed to reach this 100% efficiency. What if we were lower? Let's say, back in the 6 megabit per second case, we don't have the optimal value, and the window
is 30,000 bits, or the buffer space is 30,000 bits. Our equation says that if the window is greater than or equal to the rate times the round-trip time, we can reach the optimal throughput. That doesn't hold here, which means we will not get 100% efficiency; it will be less. The throughput, as we'll calculate shortly, will be 3 megabits per second; whereas in the other case, with the same window but a 3 megabit per second rate, the throughput is also 3 megabits per second, but, importantly, the efficiency is higher. So focus on the efficiency, which is throughput divided by rate. Yes, that's right: when you have a lower data rate you can be okay with a smaller buffer, a smaller window. And similarly, if you have a lower round-trip time, the acknowledgement gets back faster and you don't need to have as much data in flight. So if the rate is smaller, or the round-trip time is smaller, you don't need such a large window, or such a large buffer at the receiver, to get optimal performance. You're right that in practice there's some relationship between round-trip time and rate in the network, but let's treat them as parameters of the path that we're using. Okay, say the path between my client and a server, maybe the Facebook server, has a rate of 6 megabits per second and a round-trip time of 10 milliseconds; another path, from me to a Google server, maybe 6 megabits per second and 20 milliseconds. Given those values, we can work out what the buffer size should be to get the optimal; that's all we're saying. And that leads us to something else: what if we don't have the optimal, that is, the window is less than the rate times the round-trip time? What we're saying is: if the advertised window is greater than or equal to the rate times the round-trip time, we get 100% efficiency, and we cannot do any better. But what if the advertised window is less than the rate times the round-trip time, like in the picture on the screen? What's the efficiency in that case? How do you calculate the
efficiency in the case on the screen? The efficiency is the percentage of time that we spend sending data. How do we write that as an equation, as a fraction or a percentage? Look at the two vertical lines: the efficiency is the ratio between the sending time and the round-trip time; the larger the sending time relative to the round-trip time, the more efficient we are. So it's the sending time divided by the round-trip time, that is, the advertised window divided by the rate, all divided by the round-trip time, as a fraction. If the numerator is greater than the denominator, that fraction comes out above 1, but of course we're capped at 1, or 100%, because we cannot send faster than the rate. If the advertised window divided by the rate is less than the round-trip time, then the fraction is less than 1 and we're less than 100% efficient. Okay, let's finish this. How do we relate this to throughput? Here's another way to write efficiency. Say I paid for a 6 megabit per second ADSL internet link, and my throughput over that link is 4 megabits per second because of overheads; then I would say my efficiency is 4 divided by 6. So another way to think of efficiency is throughput divided by rate: the rate is how fast we can send, and the throughput is the rate at which we deliver the real data. We can now calculate throughput from these two equations. Can anyone solve them? First we said efficiency is the ratio between the advertised window divided by the rate, and the round-trip time, as per the diagram: the larger that is compared to the round-trip time, the more efficient we are. The other way to describe efficiency is that if we have some rate and some throughput, the ratio of throughput to rate is a measure of the efficiency of the link. So equate these two
together, since they're both efficiency, and calculate throughput. What do we get? The rates will cancel out; yes, you're right, but let's check everyone follows. Set throughput divided by rate equal to the advertised window divided by the rate, all divided by the round-trip time; bring rate over to the other side, and the two rates cancel, leaving throughput equals advertised window divided by round-trip time. So the throughput is the advertised window divided by the round-trip time. And we know optimal throughput, optimal efficiency, is achieved when the advertised window is greater than or equal to the rate times the round-trip time; when it's less, the throughput is as given by this equation: advertised window divided by round-trip time. Coming back to one of the examples on the board: if the advertised window is 30,000 bits in our network here, what's the throughput? I made up these numbers: my network has a rate of 6 megabits per second and a round-trip time of 10 milliseconds, and my server has a buffer size of 30,000 bits. What throughput am I going to get with TCP flow control? First, a simpler question: are we going to get optimal throughput? Optimal throughput is when the advertised window is greater than or equal to rate times RTT, and rate times RTT is 60,000 bits. Our window is not that large, so we will not reach the optimal. What will we get? 30,000 bits divided by 10 milliseconds: 3 megabits per second, as I calculated before, or 50% efficiency. So let's summarize. This multiplication of rate and round-trip time determines how big our buffer size, or our advertised window, should be. If we're at or above this value we get 100% efficiency, the maximum throughput; if we're less, of course, we get lower. This product has a name: the bandwidth-delay product. Round-trip time: think of delay. Rate, sometimes also called the data rate or bandwidth: the speed
at which we send out data. So people refer to this as the bandwidth-delay product, BDP: the bandwidth, the rate, multiplied by the delay, the round-trip time; the product of the two. It's a common quantity: if you know the bandwidth-delay product of your path or link, say 60,000 bits, then you know the buffer at the receiver needs to be at least 60,000 bits to get maximum throughput. So it's commonly used when we look at the performance of TCP flow control. These equations and a description of this are in the handout; the derivation there is slightly different, but it leads to the same equation: the advertised window needs to be greater than or equal to the rate times the RTT, which is the bandwidth-delay product. Now, in the internet, when a client and server communicate, there are many different possible values of round-trip time and rate. Before we finish, let's look at some example values. This is in a handout which we'll go through tomorrow; it's just a printout from my website, and we'll go through the demo tomorrow, but on one of the pages there are some example values, so let's finish on them. The handout is a detailed description of an example, part of which we'll go through tomorrow; towards the bottom there are example values for some very simple links. Take a fast Ethernet link; fast Ethernet refers to the 100 megabit per second LAN Ethernet standard. For example, I connect my two laptops together directly, and I'll do that tomorrow. So 100 megabits per second is my rate. What is the delay, the round-trip time, between the two laptops? Well, you can measure it; let's say it's 1 millisecond to get there and back, though in practice it's usually less. If that's the case, 100 megabits per second multiplied by 1 millisecond gives us
a bandwidth-delay product of 100,000 bits, or 12,500 bytes. That tells me that for TCP across that link, with respect to flow control, if I want optimal performance, the highest throughput possible, then my buffer at the receiver needs to be greater than or equal to 12,500 bytes (ignore the other value on the slide; that's something else). So if I have such a link and want optimal performance, I know how big my TCP receive buffer should be: 12,500 bytes. If instead I'm using a 100 megabit per second LAN, not a direct connection, so there are some delays in the switches and the round-trip time is now 10 milliseconds, then the bandwidth-delay product is 10 times larger: 125,000 bytes. In that network the buffer at the receiver needs to be 125,000 bytes. There's another value in the table for gigabit Ethernet, 1 gigabit per second of bandwidth. And what about your home ADSL? You're connecting to a server, actually downloading from a server, so you're the receiver of the data. The bandwidth is 6 megabits per second, and let's say the round-trip time is 50 milliseconds. Then you can calculate the bandwidth-delay product: the TCP buffer size needed at the receiver should be at least 37,500 bytes. If it's less, you will not get optimal throughput, you will not get close to 6 megabits per second; if it's greater than or equal, you can achieve optimal throughput. So, just some quick example values. What we'll do tomorrow is go through most of the example on that website; I'll step through it on the two computers and demonstrate some different values, and a few other things which may be useful for the last phase of the assignment. Let's stop there.
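As a closing footnote, the efficiency and throughput formulas from this lecture, and the buffer-size values quoted above, can be reproduced with a short sketch (function names are my own; the rates and round-trip times are the ones quoted in the lecture):

```python
# TCP flow-control formulas from the lecture, with integer-friendly units
# (rate in bits/s, RTT in milliseconds) so the results are exact.

def efficiency(window_bits, rate_bps, rtt_ms):
    """Fraction of time spent sending: (window / rate) / RTT, capped at 100%."""
    return min(1.0, window_bits * 1000 / (rate_bps * rtt_ms))

def throughput_bps(window_bits, rate_bps, rtt_ms):
    """Achieved throughput: window / RTT, but never more than the link rate."""
    return min(rate_bps, window_bits * 1000 / rtt_ms)

# Board example: 30,000-bit window on a 6 Mb/s, 10 ms path.
print(efficiency(30_000, 6_000_000, 10))      # 0.5 -> 50% efficient
print(throughput_bps(30_000, 6_000_000, 10))  # 3000000.0 -> 3 Mb/s

# Receive-buffer sizes (BDP in bytes) for the example links.
links = [
    ("Fast Ethernet, direct",   100_000_000, 1),   # 100 Mb/s, 1 ms RTT
    ("Fast Ethernet, switched", 100_000_000, 10),  # 100 Mb/s, 10 ms RTT
    ("Home ADSL download",      6_000_000,   50),  # 6 Mb/s, 50 ms RTT
]
for name, rate_bps, rtt_ms in links:
    bdp_bytes = rate_bps * rtt_ms // 8000  # bits x ms -> bytes
    print(f"{name}: receive buffer >= {bdp_bytes:,} bytes")
# Fast Ethernet, direct: receive buffer >= 12,500 bytes
# Fast Ethernet, switched: receive buffer >= 125,000 bytes
# Home ADSL download: receive buffer >= 37,500 bytes
```

Note this models flow control only; congestion control (next lecture) imposes its own, separate limit on the sending window.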