Last week we introduced the concept of flow control and went through how stop-and-wait flow control works. Today we're going to finish the stop-and-wait protocol, recap the efficiency calculation we ended with, and look at an alternative flow control protocol, which is much harder. So we need to proceed slowly and everyone needs to concentrate, because it gets harder as we go through today's lecture. But let's recap what we know. Flow control, remember, is about controlling the speed at which the sender sends so that the sender doesn't overflow the receiver, doesn't send too fast for the receiver. In particular, the receiver has a limited amount of memory or buffer space available to receive and store the incoming data. We cannot send so fast that we overflow that memory; if we do, data will be lost, which is bad for performance. So flow control is about making sure we don't overflow the receiver. The first simple approach we saw was stop-and-wait flow control, where we send one data frame at a time: we send a data frame, the receiver receives it, puts it in memory, and processes it; the data is stored in memory while it's being processed, and once the receiver has finished with that piece of data, it removes it from memory and sends back an acknowledgement (ACK) to A saying, I'm now ready for more, you can send me one more. So after A sends the first data frame, it stops and waits for the ACK to come back. Even if there's more data to send, we cannot send it until we receive the ACK. So data, ACK, data, ACK, and so on. One question a student asked is: when does the data arrive? I've drawn three arrows here indicating that at this time there's some data to send, and at this time, and at this time. In this example I chose those values, not randomly, but to create a nice example.
When does data arrive — when does my computer, in particular the data link layer of my laptop, have data to send to the other device? Well, it depends on what my computer is doing: the applications, the human user, what software is running, how fast my computer is. So we cannot predict when data arrives. Note this is data arriving not across the network but from the application on my computer, ready to be sent across the network — it's internal to my source computer. At what points in time does data arrive? We cannot predict, and sometimes we just assume it's random; it arrives at different times. So in this case data arrived and we sent it in one data frame. While we were waiting for the ACK, some more data arrived. We were not allowed to send it, because the stop-and-wait protocol has the rule that we cannot send the next piece of data until we've received an ACK for the previous piece. Once we receive the ACK, we send the second piece of data and get an ACK back. We don't have any more data to send at this point. So we've received an ACK and we're allowed to send, but if we have no data to send, of course we do nothing until some data arrives, and then we transmit it — it's not shown here, but it continues. The other thing students ask about is why, in this example, I've drawn some space here, while in this case there's no space. Again, this is just for the example: in the first case B receives the data frame and stores it in memory, and let's say the computer is busy, so it takes a long time to process that data. The data waits in the buffer while B is processing; when B has finished, the data is removed from the buffer and B sends back an ACK saying it's ready for more.
For the second data frame I assume B receives the data, immediately processes it, finishes with data frame 2, and sends back an ACK immediately. So again, how long does it take B to process the data — how long between receiving the data and sending back the ACK? We cannot predict; it depends on the speed of the computer, what's happening at the computer, and so on. At least in this course we do not attempt to predict the arrival times or the processing times; in some questions I may give you that information, like "assume every frame takes one microsecond to process", or zero, or "assume all the data arrived at the start". The stop-and-wait protocol just follows the rules: data, ACK, data, ACK, and so on. And we did an example calculation and arrived at the answer. Anyone remember the answer from last week? What is the throughput? 959,118 bits per second. How did we arrive at that? We did most of this last week, so you probably have it already; I've just put it into a nicer picture than last week's, showing the three frames and the familiar sequence of data, ACK, data, ACK, data, ACK. We had three frames in this question and I've drawn the timing, the clock values, on the diagram. If we assume we start at time zero, it takes some time to transmit the data frame — this rectangle, which was 8,160 microseconds; we calculated that. It takes 10 microseconds to propagate to B, so the frame has fully arrived at 8,170. It takes one microsecond to process, bringing us to 8,171; then 160 microseconds to transmit the ACK back (again, we calculated the 160 last week) and 10 for propagation, bringing us to 8,341. So the exchange of one data frame and its ACK takes 8,341 microseconds.
In this example we had three chunks of data to send; we immediately send the second one and see the same pattern, in fact the same timing, and the same with the third one, arriving at a final time of 25,023. If you subtract one of these numbers from the next, the difference is 8,341 each time, because in this specific example nothing changes in terms of propagation or transmission delay, so the timing is the same for each exchange. Anyone have trouble creating such a diagram, that is, determining the timing? I think most people got there last week, or part of the way. It's just adding up the delay of each step: transmission, propagation, processing, transmission, propagation. But the question asked: what is the throughput? The diagram gives us the time to deliver the three frames; what is the throughput? First let's look at the sending rate. Concentrate on the left side, for A. How often does A send a data frame? We can see the pattern: every 8,341 microseconds there's one data frame being sent. That is, we send one, then stop and wait, send the next, stop and wait, and so on; the period is the same in each case, 8,341 microseconds. So we're sending one data frame every 8,341 microseconds. Note there's a mistake in the picture: I wrote "ms" here, but it should be µs, microseconds, not milliseconds. Now, we're sending one frame every 8,341 microseconds, and because frames are not lost across the network — there are no errors in this case, everything is delivered — B is receiving a data frame every 8,341 microseconds, because we receive whatever we send. In fact you could calculate it and see that from when B receives the first part of the first frame until it receives the first part of the second frame, and the third, and any subsequent frames, this time is also 8,341.
You can calculate it, but one way to see it is that if we send at this rate, we must receive at that rate, because we receive everything that we send. So B receives one data frame every 8,341 microseconds. Now we have a receiving rate, and the throughput is the rate at which we receive the real, useful data. We're receiving one data frame every 8,341 microseconds, and one data frame contains a thousand bytes of real data — if you go back to the question, we had a thousand bytes of data plus 20 bytes of header. For throughput we only count the real data, not the header. So we're receiving a thousand bytes every 8,341 microseconds. A thousand bytes divided by 8,341 microseconds gives the throughput of about 959,000 bits per second. That's how we arrive at the throughput. It relies on an assumption that commonly holds in practice. In practice we don't really care about the throughput of just three frames; when we measure throughput in a real network, we care about a long period of time — many seconds, minutes, days, or longer. What is the throughput of your wireless LAN over a long period, for example? Here we've measured it across just three frames, but if you imagine there are more than three — a thousand frames — and you keep drawing the diagram, you'll see the same pattern: one data frame sent every 8,341 microseconds, one received every 8,341 microseconds, and the same throughput of about 959,000 bits per second. Any questions on how to calculate the throughput in this case? In this system, because no data is lost, you can simply calculate the sending rate, and the receiving rate will be the same. If someone gives me a thousand baht every minute — they pass me one thousand-baht note every minute — then I'm receiving a thousand baht every minute.
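The stop-and-wait timing just described can be checked with a short script. This is my own sketch (the function names are mine, not from the lecture), using the example's parameters: a 1 Mbps link, a 1,000-byte payload plus 20-byte header, a 20-byte ACK, 10 µs propagation each way, and 1 µs processing.

```python
# One stop-and-wait cycle, using the example's parameters.
# All times are in microseconds.

DATA_RATE_BPS = 1_000_000   # 1 Mbps link
PAYLOAD_BYTES = 1000        # real data per frame
HEADER_BYTES = 20
ACK_BYTES = 20
PROP_US = 10                # one-way propagation delay (2 km link)
PROC_US = 1                 # processing delay at B

def cycle_time_us():
    """Time for one data frame plus its ACK (one stop-and-wait cycle)."""
    data_tx = (PAYLOAD_BYTES + HEADER_BYTES) * 8 * 1e6 / DATA_RATE_BPS  # 8160
    ack_tx = ACK_BYTES * 8 * 1e6 / DATA_RATE_BPS                        # 160
    return data_tx + PROP_US + PROC_US + ack_tx + PROP_US               # 8341

def throughput_bps():
    """Useful data (payload only, no header) delivered per second."""
    return PAYLOAD_BYTES * 8 / (cycle_time_us() / 1e6)

print(cycle_time_us())           # 8341.0 microseconds per cycle
print(round(throughput_bps()))   # 959118 bits per second
```

Note that only the payload's 8,000 bits count toward throughput, exactly as in the lecture's calculation.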
Unless I drop the note, which I'm not going to, the receiving rate is the same as the sending rate. That's what we have here. The only other thing to be wary of is that for throughput we care about the real data, not the header; don't count the header when calculating the throughput — that's overhead. Any questions on stop and wait? That's the first protocol we've really been through, and one of the simpler ones. Let's change some parameters and see how they impact performance. In our question we had a link distance of 2 km. Change that to 200 km — what is the throughput? Try to calculate it. Everything else is the same, so you get the same shape of diagram; just the values will be different. The link distance has increased from 2 km to 200 km, 100 times larger. Either calculate the throughput or, if you think that's too hard, at least think about what's going to happen to it. If we increase the link distance, will the throughput go up, stay the same, or go down? How does link distance impact throughput? You can calculate that quickly. I'll make some notes of our values. Here's a hint: you don't need to draw the entire diagram — in fact you don't need the diagram at all to calculate. You've seen that a single phase just repeats, so if you calculate the timing for the first phase, you have that cycle time and you can determine the throughput. The only thing that's changed is the link distance, from 2 km to 200 km. What happens to our throughput? What's the transmission time of the data frame? It's the same — if only the link distance changes, the transmission time doesn't change: still 8,160. What's the propagation time for the link? It increases — to what value? One millisecond. (I haven't given the units here: the data was in bytes, the transmission times in microseconds.)
The blue is the first case, the first example. We had a propagation time of 10 microseconds. We've increased the link distance by a factor of 100, from 2 km to 200 km. The signal needs to propagate 100 times further, so it takes 100 times longer: the propagation delay will be 100 times the original 10, that is, 1,000 microseconds. Everything else stays the same. So that's 8,160 plus 1,000, which is 9,160; plus 1 is 9,161; 9,161 plus 160 is 9,321; 9,321 plus another 1,000 is 10,321. It takes 10,321 microseconds from here to here if we increase the link distance. If you keep going, you'll get the same: another 10,321, and so on. So we're sending one data frame every 10,321 microseconds, and receiving one at the same rate — the same 1,000 bytes of data every 10,321 microseconds. Someone can do the calculation: 1,000 bytes divided by that time. Does anyone have a number for the throughput? I'm sure someone does; I see several calculators. What's 8,000 bits divided by 10,321 microseconds? About 775,118 — sounds right. So roughly 775,000 bits per second, calculated by taking the data size, 1,000 bytes, times 8 to get bits, divided by the total time in microseconds. About 775,000 bits per second, or 0.775 megabits per second. In our first case, with 2 km, the throughput was 0.959 megabits per second. In the second case, with the longer link distance, we're down to about 0.775 megabits per second. In both cases the data rate is the same: I have a link with a maximum capacity, a maximum speed, of one megabit per second, but using this protocol, the rate at which I deliver real data is, in the first case, 95.9% of the data rate. We can call that the efficiency in using the link. The data rate is my upper limit, one megabit per second.
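The 200 km recalculation can be checked by making the propagation delay a parameter of the cycle time. A small sketch (function name and defaults are my own, matching the example's values):

```python
# Stop-and-wait cycle time with propagation delay as a parameter,
# to compare the 2 km case (10 us) against the 200 km case (1000 us).
# All times in microseconds.

def cycle_us(prop_us, data_rate_bps=1_000_000,
             payload_b=1000, header_b=20, ack_b=20, proc_us=1):
    data_tx = (payload_b + header_b) * 8 * 1e6 / data_rate_bps  # 8160
    ack_tx = ack_b * 8 * 1e6 / data_rate_bps                    # 160
    return data_tx + prop_us + proc_us + ack_tx + prop_us

print(cycle_us(10))    # 8341.0  -> 2 km case
print(cycle_us(1000))  # 10321.0 -> 200 km case

# Throughput for the 200 km case: 8000 payload bits per 10321 us cycle.
print(int(1000 * 8 * 1e6 / cycle_us(1000)))  # 775118 bits per second
```

Only the two propagation terms grow; the transmission and processing terms are unchanged, which is why the cycle lengthens by exactly 2 × 990 µs.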
I get a throughput of 0.959 megabits per second, so we say we're 95.9% efficient. In the second case we have the same data rate, one megabit per second, and a throughput of 0.775 megabits per second, so we're 77.5% efficient. Which one's better? The first — better efficiency. Think of it from the perspective of a company that pays for a link. You pay 10,000 baht per month to some ISP or telecom company and transfer your data across that link. In the first case you're using 95.9% of the time to deliver real data; in the second case, only about 77%. You're using your money, your resources, less efficiently. We want efficiency to be as high as possible. Increasing the propagation delay decreased our efficiency under the stop-and-wait protocol. Why? Because with a large propagation delay, we send our data and it takes a long time to get there and a long time for the ACK to come back. Remember, with stop and wait we send our data — delivering the real data — and then we stop and wait. So the larger the propagation delay, the longer we wait, the longer we spend not sending across the link, which is inefficient. So increasing the propagation delay makes our use of the link more inefficient. I've tried to capture these trade-offs in this diagram, focusing on just one exchange (these are a little bit back to front, so let's ignore the first diagram for now). I've drawn one exchange — of course it could continue — without worrying about the numbers, to look at what happens when we change some of the values. Assume the first case is the middle one, where we transmit the data, it propagates, the ACK is transmitted, and it propagates back. The right one, compared to the middle one, is the same except with an increased propagation delay.
So you can see visually that the time between here and here — between the end of the data, or the start of the ACK — has increased in the right one: a longer propagation delay, therefore a longer time before we get the ACK back. Look at this period: the time spent waiting, sending nothing, is much larger than in the middle case. The time spent sending the data is the same, but the time spent not sending anything has increased when we increase the propagation delay. Therefore the rightmost case is less efficient than the middle case. The efficiency is the fraction of time that we spend sending real data. I've tried to capture that on the diagram with the red and green lines. The red line is the time for the entire transfer, the total time. The green line is the time spent sending real data; I've drawn it as most of the data transmission, with the little bit at the start being the time spent sending the header. That's the idea in this diagram — there are no actual numbers to it. But if you compare the green line to the red line in the middle and right diagrams: in the middle one the green line is a larger percentage of the red line; in the right one it's about half of the red line, whereas in the middle it's much more than half. That means in the middle case we spend a much greater share of the total time transmitting. The middle case is more efficient than the right case: increasing the propagation delay, with everything else the same, reduces our efficiency. So if we have a long link and use stop and wait, our efficiency can be quite low, which is a bad thing. Now let's go backwards on this diagram, to the first one. Assume this was our original case: transmit data, propagation, ACK, propagation. What if we changed not the link distance but, for example, the amount of data being sent? In our example we had 1,000 bytes of real data. What if we increased that to 2,000, with everything else the same?
So we have 2,000 bytes plus a 20-byte header, plus the propagation, the ACK, and its propagation. By increasing the size of the data — also called the payload — we increase the data transmission time: this rectangle grows into the one in the middle. More data to send, larger transmission time. Which one's more efficient, the left or the middle? Hands up for the left one. You need to wake up to answer this question. Hands up for the left one being more efficient. We've got two options, left and middle — ignore the right one, that's done. Hands up for the left. Hands up for don't know. Hands up for the middle one — I'm putting my hand up; anyone want to follow? We'll explain why in a moment, but one way to see it visually is to look at the fraction of time we spend sending real data: the green line versus the red line. You can see that, as a fraction of the red line, the green line is smaller in the left case — about half or less than half of the red line. That is, we spend less than half of the time sending the real data, say the 1,000 bytes; the rest of the time is waiting and transmitting the header. In the middle case we spend more than half of the time sending real data — a good thing. We'd like to spend all of our time sending real data, but the way the protocol works, we cannot: we have to wait for the ACK. So the middle one is more efficient than the left one. You can check with some calculations — I'll give you some numbers in a moment, and I won't ask you to do it in the lecture, but you could take this example, change 1,000 bytes to 2,000 bytes, and see what answer you get. You'll see that the efficiency goes up, because we spend more time transmitting and less time waiting.
That's better for our system performance. Rather than asking you to calculate every time, I have a web page with a simple calculator that takes all of our parameters and calculates the efficiency; you can have a look at it in your own time. We can set the data rate for our link, the distance of two kilometres, the transmission speed — in fact we can set it in both directions, but let's keep them the same — a processing delay of one microsecond, a header of 20 bytes, a payload of 1,000 bytes, and an ACK of 20 bytes. I calculated before and we got what we got in our first example: it calculates all these values and finds the efficiency is 95.9%. That's our first example. Now let's change some values and see what we get. Everything the same, but make the payload larger: 10,000 bytes instead of 1,000, and calculate. Let's hope my internet works — there's a long delay to the server. And at the bottom, the efficiency is now up to 99.6%. You can check that; you can try it yourself. The two cases were: the first, 95.9%, with a small data size of 1,000 bytes; in the second we increased it to 10,000, so we spend much more time transmitting real data as opposed to waiting, and we achieve a higher efficiency, up from about 96% to 99.6%. You can also change the other parameters, like the distance. Let's set the payload back to 1,000 and do one more: create a plot with the link distance ranging from 0.2 kilometres — 200 metres — up to 2,000 kilometres. Go to the bottom and it shows a plot: instead of calculating for just one value, it calculates the efficiency as the distance varies from 200 metres up to 2,000 kilometres. We can see the efficiency goes down as the distance goes up.
By varying the parameters in that calculator you can see their impact on the efficiency. One more. Everything the same — actually, I think I made a mistake there: I should have changed the distance in both directions and only changed one. Everything back to normal, but let's change the data rate to one tenth in both directions. What's going to happen — efficiency up or down? The data rate has changed from 1 megabit per second down to 0.1 megabits per second, 100 kilobits per second. Is the efficiency going to go up or down? Two options. Hands up for up — is the efficiency going up if I reduce the data rate? Okay, down. Hands up for down. Okay, everyone's wrong. Let's try. This is a confusing one. It's a very small change. Remember the first calculation: we had 95.9%, because we had 959,000 bits per second out of 1 million bits per second. We just decreased the data rate to 0.1 megabits per second, and the efficiency went from 95.9% up a little, to 96.1%. It still went up. The throughput, of course, is less, but the efficiency is more. Remember, the throughput is the absolute value — only 96,130 bits per second — but as a fraction of the capacity it's a larger percentage than in the previous case. So sometimes it's best to think about efficiency, not throughput. I've got a link; how efficiently am I using that link? That's what we care about. In this case I have a link with a speed of 100 kilobits per second and I used 96.1% of it; in the previous case, with a link speed of 1 megabit per second, I used 95.9%. I encourage you to try different values in the calculator — you can quickly get an idea of the impact of the different parameters.
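The calculator's results can be reproduced with one parametric function. This is a sketch of my own (not the lecturer's calculator code), with defaults set to the first example; it checks the two results above: a larger payload raises efficiency to about 99.6%, and a slower link raises it slightly to about 96.1%, even though the absolute throughput falls.

```python
# Stop-and-wait efficiency = throughput / link data rate.
# Defaults match the lecture's first example; times in microseconds.

def efficiency(data_rate_bps=1_000_000, payload_b=1000,
               header_b=20, ack_b=20, prop_us=10, proc_us=1):
    data_tx = (payload_b + header_b) * 8 * 1e6 / data_rate_bps
    ack_tx = ack_b * 8 * 1e6 / data_rate_bps
    cycle = data_tx + 2 * prop_us + proc_us + ack_tx  # one data+ACK exchange
    throughput = payload_b * 8 * 1e6 / cycle          # payload bits per second
    return throughput / data_rate_bps

print(round(efficiency(), 3))                       # 0.959 -> baseline
print(round(efficiency(payload_b=10_000), 3))       # 0.996 -> larger payload
print(round(efficiency(data_rate_bps=100_000), 3))  # 0.961 -> slower link
```

You can also sweep `prop_us` to reproduce the distance plot: efficiency falls steadily as the propagation delay grows.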
The reason decreasing the data rate increases the efficiency is the same reason that increasing the payload does: we spend more time transmitting data. So in summary, comparing the left case to the middle one: increasing the payload, or decreasing the data rate, increases efficiency. Keeping everything the same but increasing the propagation delay decreases efficiency. Large delay: bad for performance. That brings us to our last slide on stop and wait. There's an equation you can use to approximate the efficiency, but since you can already calculate it manually, we won't go through it; it's not so important. What's important is to know the trends, the trade-offs. Stop and wait is efficient when the data transmission time is much larger than the propagation time. When is it inefficient in practice? Usually on a link with a very high data rate, like optical fibre: with a very high data rate, stop and wait can be very inefficient. We saw in our example that a lower data rate gave more efficiency than a higher one — not more throughput, but more efficiency. So with a high data rate, stop and wait is not so good. Or with a link to a satellite 36,000 kilometres up in space: a very long distance, a very long propagation delay, so stop and wait can be very inefficient, very bad for performance. So stop and wait as a flow control protocol achieves its objective of not overflowing the receiver, and it's very simple — data, ACK, data, ACK — but it can be very inefficient in some cases. So we need an alternative, and that's what we'll go through next. Any questions before we move on to the next flow control protocol? Everything okay? Sure? Tomorrow we'll have a small tutorial with a few questions, a little about this and mainly about the next one.
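For reference, the exact per-cycle efficiency implied by the worked examples (the notation here is mine, not the slide's equation) can be written as one expression, where each $T$ is a time per data/ACK exchange:

```latex
% T_payload : time to transmit the payload bits
% T_hdr     : time to transmit the header bits
% T_ack     : time to transmit the ACK
% T_prop    : one-way propagation delay
% T_proc    : processing delay at the receiver
\[
\text{efficiency} \;=\;
\frac{T_{\text{payload}}}
     {T_{\text{payload}} + T_{\text{hdr}} + T_{\text{proc}} + T_{\text{ack}} + 2\,T_{\text{prop}}}
\]
% First example (in microseconds):
% 8000 / (8000 + 160 + 1 + 160 + 2 \times 10) = 8000 / 8341 \approx 95.9\%
\]
```

All three trends fall out of this fraction: a bigger payload or a slower data rate grows the numerator relative to the fixed delays, while a longer link grows only the $2\,T_{\text{prop}}$ term in the denominator.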
Let's spend the rest of the lecture introducing the alternative protocol — much more complex. The issue of how big the frame should be we'll come back to later, not today: should it be a thousand bytes, two thousand bytes, a million bytes, one byte? We'll discuss that later. Stop and wait can be inefficient in some cases. The reason is that we send one data frame and then may spend a lot of time sending nothing, because we send and then we wait. That's inefficient use of our link — we're not sending anything across it. So another protocol, called sliding window flow control, allows the source to send more than one frame at a time. With stop and wait, we send one frame and wait for an ACK. With sliding window we can send multiple frames — one, two, three, four, five, or more, or fewer — and then wait for an ACK. The idea is that we spend less time waiting for the ACK to come back and more time transmitting data. But it's much more complex, because we need to somehow keep track of how many frames we're allowed to send and how many have been received. So let's go through it and explain how it works. The first thing we need to introduce for this sliding window protocol: because we're allowed to send multiple frames, we have to keep track of the ordering of those frames — which frames have been received, which have been sent and acknowledged. So now, for every frame we send, we include a sequence number in the header of that frame: some value that indicates frame one, frame two, frame three — just a number to indicate the ordering of the frame. It's usually k bits in length, so it's a binary value, and with a k-bit sequence number we can store decimal values from zero up to two to the k, minus one. So what sequence numbers can we store if we have a two-bit sequence number?
What values can you store if I give you two bits of memory, in decimal? Everyone knows this; let's just make it clear. If we have two bits to store the sequence number, the set of values we can store is zero, one, two and three. How many is that? Four values, ranging from zero up to two to the power of two, minus one — zero up to three. With three bits we can go from zero up to seven; four bits, zero up to fifteen, and so on. Now, our source normally has many frames to send — more than four, more than eight, thousands of frames. We give each frame a sequence number, but because we're limited in the number of bits used to store that sequence number, we need to wrap around. One way to draw that: say we have a lot of frames to send — there's the first frame of data, the second, the third, the seventh and more — and I use a three-bit sequence number, k equal to three. We start at sequence number zero: the first frame is given sequence number zero, so inside the header, in those three bits, we store the decimal value zero. What's the value for the second frame? One — easy. The seventh frame? Six. The eighth? Seven. The ninth? Uh oh. We cannot store a decimal eight in a three-bit binary value, so we wrap around and come back to zero: the ninth frame gets sequence number zero. And the pattern repeats: zero up to seven, zero up to seven, and so on. That's the idea: because we have a limited number of bits to store this sequence number, we cannot count up to infinity; we reach some maximum value, go back to zero and start again. So the ninth frame has the same sequence number as the first frame. What sliding window flow control does is allow the sender to send more than just one frame before waiting for an ACK. In stop and wait: send one frame, wait for an ACK.
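The wraparound is just counting modulo 2^k. A tiny sketch (the function name is my own) showing that with k = 3 the ninth frame reuses sequence number zero:

```python
# Sequence numbers wrap modulo 2**k: with a k-bit field the values run
# 0 .. 2**k - 1 and then start again at 0.

def seq_num(frame_index, k=3):
    """Sequence number of the i-th frame (1-based) with a k-bit field."""
    return (frame_index - 1) % (2 ** k)

# Frames 1..10 with a 3-bit sequence number:
print([seq_num(i) for i in range(1, 11)])
# [0, 1, 2, 3, 4, 5, 6, 7, 0, 1] -> the 9th frame wraps back to 0
```

With k = 2 the same function yields 0, 1, 2, 3, 0, 1, … — four distinct values, as above.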
With sliding window: send a set of frames, then wait for an ACK. And the protocol defines how many frames are in that set — how many I can send before waiting for an ACK. The sender is allowed to send up to W frames (let's give it a parameter, W) without receiving an ACK. I send W frames and then wait for an acknowledgement to come back; when one arrives, I may send more — we'll see. For this to work, the sender needs to keep track of which frames it has already sent and received acknowledgements for. This diagram shows a set of frames, in order, with their sequence numbers; there are more off to either side. In this example the numbers go from zero up to seven and repeat, since there's a three-bit sequence number. The diagram is one instant in time: the sender has sent some frames already and has already received acknowledgements for them, so we say those frames are completed — I sent the data, I received an ACK back, that one was successful, and I can forget about that piece of data. In this diagram, all the frames to the left of this vertical bar are those that have been sent and acknowledged already. Okay, those are done. Between the vertical bar and this blue rectangle are the two frames, in this example, that have been sent but not yet acknowledged. So at this point in time my sender — which is sending frames and waiting for ACKs — has sent all of these frames and already received ACKs for them: done. It has sent frames six and seven and is waiting for the acknowledgements for those two. The next part, the blue rectangle, indicates the set of frames I'm allowed to send: frames zero up to four, which I haven't yet sent. Remember, we always send frames in order. I'm allowed to send zero up to four at this point in time, but nothing to the right of them — five, six, seven, or onwards.
We're going to use this type of diagram to illustrate how the sliding window protocol works. So the sender records the last frame acknowledged: in this example, frame number five is the last one we received an ACK for, meaning we sent the data and received the ACK. The last frame transmitted, seven in this example, means we've transmitted frame seven but haven't yet received the ACK for it. It also means we've transmitted frame six: because we go in order, if frame five has been transmitted and acknowledged and frame seven has been transmitted but not acknowledged, then having done seven we must also have done six. So six and seven have been sent already. Then there's the window — the concept of a window. It indicates the set of frames we're allowed to send: we're allowed to send another five frames, and after sending those five (if we do), we'll have to wait for an ACK. Remember: stop and wait, send one frame, wait for an ACK; sliding window, send a set of frames, a window of frames, wait for an ACK. So at any point in time we are allowed to send a set of frames, and the blue rectangle indicates that window. We'll see the "sliding" part of the sliding window soon. We also normally keep track of the current window size. So in this example, the last frame acknowledged is five, the last frame transmitted is seven, and the current window size is five, meaning five frames. Note that the window size is a count of frames, whereas the first two values are frame numbers, that is, sequence numbers. The sender must record or keep track of these values all the time: as it sends data, they change, and as it receives acknowledgements, they change — we'll see how shortly. Before we move on: in this example, which we'll keep using, we have a three-bit sequence number, allowing values from zero up to seven, and there's a maximum possible size of the window.
And I think it's on one of the other slides, but let's write it: the maximum window size is two to the power of k, minus one, where k is the length of the sequence number in bits. In our example k is three, so eight minus one: the maximum window size is seven. What that means is that the maximum number of frames A can send before waiting for an ACK is seven. In stop and wait I'm allowed to send one frame and then wait for an ACK. In this instance of sliding window I'm allowed to send seven frames and then wait for an ACK. That's how we can compare them. Why do we send seven frames? Again, to overcome the inefficiency of stop and wait. In stop and wait we send one frame and then wait a long time for the ACK to come back. But if I can send seven frames, I send one frame, then the next, and the next; while I'm waiting for the first ACK to come back I'm actually sending subsequent frames, leading to more efficiency. A maximum window size of seven means I'm allowed to send a maximum of seven frames before waiting for an ACK. From this diagram's perspective, that's the span between the vertical bar and the right edge of the blue box: it should be seven frames. Two are outstanding; I've sent those two. The maximum I'm allowed to have outstanding is seven, so if I've sent two, I'm allowed to send five more. That's where this comes from: the maximum is seven, two have been sent, five are allowed to be sent. If I send one more, frame zero, then three are sent and four are allowed to be sent, because the total is always limited to seven. Graphically, you'd see this blue box shrink, or close, on this side. Let's see that in a larger example. First, though: the sender keeps track of these three values: the last frame acknowledged, graphically the vertical bar; the last frame transmitted, just left of the blue box; and the current window size, the length of the blue box. The receiver does much the same.
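The arithmetic in that paragraph can be sanity-checked with a short sketch (again my own illustration, not lecture code): the maximum window for a k-bit sequence number is 2^k - 1, and each extra transmission shrinks the current window by one.

```python
def max_window(k):
    """Maximum frames outstanding for a k-bit sequence number."""
    return 2 ** k - 1

# k = 3 gives the lecture's maximum of 7 frames before waiting for an ACK
print(max_window(3))                 # 7

# 2 frames outstanding -> 5 more allowed; send one more -> only 4 allowed
for sent in (2, 3):
    print(sent, max_window(3) - sent)
```

This matches the numbers in the diagram: two frames sent gives a window of five; sending frame zero as well leaves a window of four.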
The sender keeps track of the frames sent and acknowledged; the receiver keeps track of the frames received and acknowledged. It has buffer space for a set of frames. How many? W frames, for example a maximum of seven. It keeps track of the frames which have been received. So it's received frames zero up to five and has already sent an acknowledgement for them. They're done, successful, all of them. At this point in time the receiver has received and stored in its buffer frames six and seven. It's received them; they're in the buffer waiting to be processed. It's got space in the buffer for a maximum of seven frames; it's currently got two in the buffer, so it's got space for another five. So we'd say the window from the receiver's perspective is five: we can store another five frames if someone sends them to us. If we receive frame zero, then we'd have three frames in the buffer and space for four more, and we'd see this blue rectangle close on the left-hand side. So frame six is currently in the buffer, received. If we send an ACK for frame six, that is, it's completed, you'd see graphically this vertical bar move along here: frame six becomes one of those which has now been received and acknowledged. Let's go through and see how these concepts work in a larger example. And you have it; I know it's a bit small. I have it a little bit bigger here, not much. Let's go through this example. It shows one example of A and B exchanging data using sliding window flow control. Is it visible on the printed copy? Just, okay. At the start, A has not sent anything, B has not received anything, B has space in its buffer for seven frames, and A is allowed to send seven frames. So at the start everything is initial. The maximum window is seven, meaning A is allowed to send seven frames, and that's indicated here. It's going to start at sequence number zero, so it's allowed to send zero through to six.
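The receiver's side mirrors the sender's, and it too can be sketched in a few lines (my own illustration; the names are assumptions, not from the lecture): the receive window is simply the buffer space left over after counting frames received but not yet processed and acknowledged.

```python
# Receiver-side sketch: the window is the buffer space still free.
BUFFER_CAPACITY = 7                 # room for seven frames, as in the lecture

def receive_window(frames_in_buffer):
    """How many more frames the receiver can accept right now."""
    return BUFFER_CAPACITY - frames_in_buffer

buffered = [6, 7]                   # frames 6 and 7 received, awaiting processing
print(receive_window(len(buffered)))   # 5: space for five more frames

buffered.append(0)                  # frame 0 arrives next
print(receive_window(len(buffered)))   # 4: the window closes on the left
```

Processing a frame and acknowledging it would remove it from `buffered`, opening the window again on the right, exactly as the vertical bar moves in the diagram.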
That's indicated by the blue rectangle. And let's say it has some data to send. The window indicates what we're allowed to send: we're allowed to send seven frames, but we only send whatever we have available. In this example, let's say A has three frames to send; it doesn't have seven, it only has three chunks of data to send, 3,000 bytes for example. So it sends frames zero, one and two. From the perspective of the source A, in the sliding window, frames zero, one and two now move into the set which have been sent but not yet acknowledged: to the right of the vertical bar, but no longer in the window. Three have been sent. A maximum of seven are allowed to be outstanding at any one time; three have been sent, so we've got a window of four. The window covers four frames, which means A, if it had data, could send more frames. But it's got no more data in this example. From B's perspective, it's initially expecting to receive seven frames. It hasn't received anything yet. Then it receives frames zero, one and two; they propagate across the network, across the link. So frames zero, one and two become frames which have been received but not yet acknowledged. B has received them, put them into memory, into a buffer, and is processing those frames. It hasn't yet sent back an acknowledgement for them. So it's received three, stored those three in memory; it's got space for a total of seven, so the window is now four. It can receive another four. Any problems so far? I know there are a lot of new concepts here and a lot of details for the sender and receiver, but the main idea is that we're trying to keep track of what we have sent and what we have received. We'll see how it helps as we go through later. Now, here's a new concept: we send back an acknowledgement. B received zero, one and two. Let's say it processes all three of them; it's done, finished with zero, one and two. They're completed.
Let's send back an acknowledgement telling A: I have finished with zero, one and two; the next one I expect is three. Instead of sending three acknowledgements, what we can do is send a single acknowledgement that includes a number indicating the next value expected: three, in this case. B has received zero, one and two, processed and finished with them, so the next sequence number expected by B is three. So it sends back an acknowledgement including the value three, telling A: everything before three is done, I now expect you to send me three and beyond. This acknowledgement is also called a receive ready message, because it means the receiver is ready to receive the frame with sequence number three. Hence RR here. I sometimes just call it an acknowledgement, an ACK, but it's the receiver telling A that it's ready to receive frame three. From B's perspective, after sending that ACK, this vertical bar moves along to this position, because zero, one and two now move into the set of frames which have been received and acknowledged. They're complete, they're done; we're finished with those frames. We don't have anything stored in the buffer at the destination, so we've got space for another seven frames. Our window has grown to seven frames, because everything we received was processed and acknowledged. Now look at A when it receives this acknowledgement. Prior to that, A had zero, one and two outstanding, meaning they had been sent but not yet acknowledged. Then we receive this RR 3 message, meaning B is now expecting frame three. If B is expecting frame three, that implies everything before frame three was successful, because everything happens in order. So from A's perspective, zero, one and two are complete when it receives this, because they come before frame three.
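The cumulative nature of the RR message can be sketched as follows (an illustration under my own naming, not lecture code): a single RR carrying the value n confirms every frame after the previous acknowledgement and before n, in circular sequence-number order.

```python
MOD = 8   # 3-bit sequence numbers, as in the lecture

def frames_acked_by_rr(last_ack, rr_next_expected):
    """Sequence numbers newly confirmed by RR n: everything after the
    previous last_ack, up to but not including n (circular order)."""
    acked = []
    seq = (last_ack + 1) % MOD
    while seq != rr_next_expected:
        acked.append(seq)
        seq = (seq + 1) % MOD
    return acked

# B sends RR 3 after processing frames 0, 1 and 2; A has had nothing
# acknowledged yet (last_ack starts at 7, just before sequence number 0).
print(frames_acked_by_rr(7, 3))   # [0, 1, 2]
```

So one RR does the work of three separate ACKs, which is exactly the saving the lecture describes.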
So the vertical bar moves up to this position: zero, one and two are done, successful. Currently there are no more frames outstanding; we haven't sent anything else, so we're allowed to send another seven frames. The window is seven. That's an example of how A and B, the source and destination, keep track of what they have sent, what they have received and what has been acknowledged, because we need a limit on how much we send before we wait for an ACK, and that's what the window does. Complex. Any questions? A lot of information. Clear up to here? What's not clear? What is one thing that's not clear? Don't say everything. Can you see why we go from here to here, how the source changes from this state to this state? This diagram is just recording the status of one computer: some values in memory, that is, some variables. It records the current state of what I have sent, what I have received, what has been acknowledged. As we send things, this state changes, and as we receive ACKs, this state changes, and it changes so that we never send more than seven frames before waiting for an ACK. Remember, we're still trying to do flow control. Without flow control, all we'd do is just keep sending, but the problem with that is we can overflow the receiver. So in sliding window we limit the number of frames we're allowed to send before we have to wait for an ACK from the receiver, so we don't overflow it. In this example we're limited to seven. So if we've sent three, we're allowed to send another four. If those first three are acknowledged, then we're allowed to send another seven, because the acknowledgement means that B has processed them; they're done, they're not stored in memory at B, so A is allowed to send another seven. In this example it sends four frames: three, four, five and six are sent. It's then allowed to send another three.
At some stage later it receives this ACK, this receive ready four. See how that changes things? Receive ready four means that B is now expecting frame four. A has sent three, four, five and six; it's allowed to send frames seven, zero and one. It receives this ACK saying B expects frame four. First, that implies that frame three, which has already been sent, was successful: if I've sent three, four, five and six and B says it's expecting four, and everything's in order, it means three is done, received and acknowledged. But four, five and six haven't yet been acknowledged, because B said it's expecting to receive frame four. So the state of A changes: it's allowed to send four frames, and four, five and six, we say, are still outstanding. We're still waiting for an ACK for them. What happens at B? See if you can make sense of the last part of B. Note that these diagrams just draw the state of the source or destination at selected points in time; for clarity we do not show the state at every instant, and at this point in time the state would be different from here. So going back to B, the destination: at this point we're expecting to receive up to seven frames, three through to one. Then at this point we receive frame three, so three is received. Let's say we process that frame immediately and then send back an ACK here: receive frame three, send back an ACK saying I'm now ready to receive frame four. We receive three and send the ACK before frame four arrives; that's the timing here. After sending that ACK, this is our state: three was received but then immediately acknowledged, so now we expect to receive four through to two. We see the window has a size of seven. Then we receive four, five and six; they're put in memory, and we're expecting to receive another four, a total of seven here. And of course we can keep going.
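The numbers in that exchange can be checked with the same modular arithmetic (my own sketch, not from the lecture): after sending frames three through six the window is three, and receiving RR 4 opens it back up to four.

```python
MOD, MAX_WINDOW = 8, 7   # 3-bit sequence numbers, maximum window 7

def window(last_ack, last_sent):
    """Frames the sender may still transmit before waiting for an ACK."""
    return MAX_WINDOW - (last_sent - last_ack) % MOD

# After A sends frames 3, 4, 5, 6 (frame 2 was the last one acknowledged):
print(window(2, 6))   # 3: only frames 7, 0 and 1 may still be sent

# RR 4 arrives: frame 3 is now confirmed; 4, 5 and 6 remain outstanding.
print(window(3, 6))   # 4: frames 7, 0, 1 and 2 are now allowed
```

Each ACK moves `last_ack` forward, which is what opens the window on the right-hand side of the diagram.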
That's as far as the example goes. This is just one example. Why did we send three frames here? Why did we send an ACK here after receiving one frame, when we didn't do that up here? It's just one specific example; it could be different in other cases. What's important is, given some exchange of frames, how does it impact the state of the source A and the destination B? What's the name of this protocol? Sliding window. And you can see the window: look at the blue rectangle. That's our window. The window at the source is seven frames at the start. When we transmit frames, the window closes on the left side; it gets smaller. When we receive acknowledgements, the window opens; it gets bigger on the right side. Sending frames means we're allowed to send less: the window gets smaller. Receiving ACKs means we're allowed to send more: the window gets bigger, it opens. And over time, as we exchange frames, you can see the window open and close and slide along. That's the concept and where the name comes from, and it applies at both the source and destination. As we have more frames to send, we just keep going; the sequence numbers keep repeating, zero up to seven, zero up to seven, for as long as we need. That's a lot to take in, and enough for today.
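The whole lecture example, window closing on sends and opening on ACKs, can be replayed in a toy sender (my own sketch; the class and method names are my assumptions, not part of any real protocol library):

```python
MOD, MAX_WINDOW = 8, 7   # 3-bit sequence numbers, maximum window 7

class Sender:
    """Toy sender tracking sliding window state (illustration only)."""
    def __init__(self):
        self.last_ack = MOD - 1    # nothing acknowledged yet; next seq is 0
        self.last_sent = MOD - 1   # nothing transmitted yet

    def window(self):
        return MAX_WINDOW - (self.last_sent - self.last_ack) % MOD

    def send(self, n):
        assert n <= self.window(), "would exceed the window"
        self.last_sent = (self.last_sent + n) % MOD   # window closes

    def receive_rr(self, next_expected):
        self.last_ack = (next_expected - 1) % MOD     # window opens

a = Sender()
a.send(3)            # frames 0, 1, 2
print(a.window())    # 4
a.receive_rr(3)      # B expects frame 3: frames 0, 1, 2 are done
print(a.window())    # 7
a.send(4)            # frames 3, 4, 5, 6
a.receive_rr(4)      # only frame 3 confirmed so far
print(a.window())    # 4
```

Running the trace reproduces the window sizes from the slides: 7 at the start, 4 after the first three frames, back to 7 on RR 3, and 4 again after RR 4.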