In the previous lecture we went through two examples of drawing a time sequence diagram, calculating the timing of different events, and using that to calculate throughput. In the first protocol, A just sent data frames to B, one after another. In the second protocol, A sent a data frame and then waited for an ACK to come back. The ACK is an acknowledgement: when B receives data, it sends back a response saying, thank you for the data, please send me some more. We'll summarize those two cases again, but the question is: why would we use the second protocol, sending back an ACK, instead of the first? Here is the example we went through; you have it on the printout in front of you. It's slightly different from what we did in the lecture: it's what I did with the IT class, almost exactly the same, but it may look different from what you copied down. We had a link from A to B which was two kilometres long with a data rate of one megabit per second. We had a frame of 1,000 bytes of payload and 20 bytes of header. From that we could calculate the transmission delay of the frame, 8,160 microseconds, and the propagation delay of the link, 10 microseconds. With the first protocol we drew the time sequence diagram, where A continuously sends frames to B, one after the other. Looking at the timing of events, A transmits one new frame every 8,160 microseconds. And because there's no waiting, no acknowledgements and no errors, B also receives one frame every 8,160 microseconds: it starts receiving the first frame at time 10, the next at 8,170, and so on. Each frame contains 1,000 bytes of payload.
So we can say B is receiving 1,000 bytes of payload every 8,160 microseconds, which gives a throughput we calculated to be about 980,000 bits per second. The data rate is 1 million bits per second and the throughput is 980,000 bits per second, giving an efficiency of about 98%. With this protocol we deliver payload to destination B with an efficiency of 98%, meaning 98% of the time B is receiving payload. What is B receiving for the other 2% of the time? Header. From the user's perspective the header is not of any use, so we don't count it towards the throughput. That was our first protocol, but then we modified it; you have that on the other side of the sheet, and there are a few spare sheets lying around for those arriving late. Then we said: what if we have some different rules? In addition to the previous setup, we introduced a rule that says A must wait for an acknowledgement frame, a special type of frame, before it can transmit the next data frame. So instead of A continuously sending data frames, it sends one and then waits; the rule for A is that it must wait until it receives an ACK frame before it sends the next data frame. And the receiver B, when it receives a data frame, will process it: it looks at it and does some processing, and in the example printed out I said the processing takes some time, one microsecond. That's slightly different from the example we went through in class; in this one I introduced a processing delay. B receives data, processes the data, and then sends an ACK back. So this is a different protocol. We also need to know the ACK size, 20 bytes, with a transmission time of 160 microseconds. From that, we can draw the time sequence diagram. Let's follow it through.
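Before following the diagram, the continuous-send numbers above can be checked with a short script. This is just a sketch with my own variable names; the signal velocity of 2×10⁸ m/s is an assumption that matches the 10-microsecond propagation delay over two kilometres.

```python
# Checking the continuous-send example: 1,000-byte payload + 20-byte header,
# 1 Mbit/s link, 2 km at an assumed signal velocity of 2e8 m/s.
PAYLOAD_BITS = 1000 * 8
HEADER_BITS = 20 * 8
DATA_RATE = 1_000_000          # bits per second
DISTANCE = 2_000               # metres
VELOCITY = 2e8                 # metres per second (assumed)

t_frame = (PAYLOAD_BITS + HEADER_BITS) / DATA_RATE   # transmission delay, s
t_prop = DISTANCE / VELOCITY                         # propagation delay, s

print(t_frame * 1e6)   # 8160.0 microseconds
print(t_prop * 1e6)    # 10.0 microseconds

# With continuous sending, B receives one 1,000-byte payload per t_frame:
throughput = PAYLOAD_BITS / t_frame
print(round(throughput))                 # 980392 bits per second, ~980 kbit/s
print(round(throughput / DATA_RATE, 3))  # efficiency ~0.98
```

The efficiency here is just the payload fraction of each frame, 8,000 of every 8,160 bits on the wire.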
A sends the first data frame, DATA 1. B finishes receiving it at time 8,170. Then there's the one-microsecond processing delay. You didn't see this in the lecture on Tuesday, but I introduced it in this example just to say that B receives the frame, maybe the computer is not so fast, and it takes some time to do some processing on it. Then it decides to send back an ACK. I draw the first ACK as a rectangle: it takes 160 microseconds to transmit, so it arrives back at A at time 8,341. From when A finishes transmitting the first frame at time 8,160 until time 8,181, A is doing nothing. For the first period, A transmits data frame 1; when it finishes transmitting, it waits for an ACK to come back. It's waiting, it's waiting, now it starts receiving the ACK, and at time 8,341 it has fully received the acknowledgement. We assume that to use a frame we must fully receive it; we can't use a partially received frame. So at time 8,341 the rule for A has been met: I've received the ACK for the previous frame, so let's move on to the next data frame. A transmits data frame 2, it propagates, B processes it and transmits the ACK back, and that ACK fully arrives at time 16,682. So this was using a different protocol. Let's finish the throughput calculation for this case. We could keep drawing frames, but we won't, because I think you'll see the pattern. Look at the receiver B: it starts receiving the first data frame at time 10, and it starts receiving the second data frame at 8,351, a difference of 8,341 microseconds. Actually, we need to draw at least the start of the next frame, data frame 3. We won't complete the procedure, but when does data frame 3 start arriving at B? It is transmitted at 16,682 and takes 10 microseconds to get there, so 16,692. So B started receiving the first data frame at time 10, then moved on to the second data frame at time 8,351,
and then the third data frame starts arriving at 16,692. What's that time period? 16,692 minus 8,351 is 8,341 microseconds. And if you keep drawing the next frame, you'll see the time until B starts receiving data frame 4 is another 8,341 microseconds. You can follow it through if you want, and you'll see that every 8,341 microseconds we start receiving a new frame. If that's hard to follow, you could also look at transmitter A: every 8,341 microseconds it's transmitting a new frame. It starts at time zero, starts sending the second data frame at 8,341, and starts sending the third at 16,682; the difference each time is 8,341. That is, A is transmitting one frame every 8,341 microseconds, and similarly B is receiving one frame every 8,341 microseconds. It makes sense that these are the same, because why would they be different? Well, maybe if a frame was lost, but we have no errors in this case. If we transmit at some rate, then we expect to receive at that rate; we're transmitting one frame per 8,341 microseconds, therefore we expect to receive at that same rate, unless there were errors, which would be a different scenario. So our throughput can be calculated from that. B is receiving one frame every 8,341 microseconds, and the throughput is the count of the payload received: one frame contains 1,000 bytes. You'll calculate that to be 959,118 bits per second. The data rate is one megabit per second, so the efficiency is approximately 96%, or 95.9%. So we've got two different protocols. In the first one, where A continuously sends, we got an efficiency of about 98%. In the second one, where A must wait for the ACK to come back before it sends, we get a lower efficiency of about 96%. In terms of throughput or efficiency, the first one is better. If we change the propagation delay or the frame size, we'll get different numbers,
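The stop-and-wait cycle above can be sketched as a simple sum of delays. The variable names are mine; the numbers come straight from the worked example.

```python
# One stop-and-wait cycle from the worked example (all times in microseconds):
# transmit data, propagate to B, process at B, transmit ACK, propagate back.
T_DATA = 8160   # data frame transmission delay
T_PROP = 10     # one-way propagation delay
T_PROC = 1      # processing delay at B
T_ACK = 160     # ACK transmission delay

cycle = T_DATA + T_PROP + T_PROC + T_ACK + T_PROP
print(cycle)    # 8341 microseconds between successive data frames

payload_bits = 1000 * 8
throughput = payload_bits / (cycle * 1e-6)   # bits per second
print(round(throughput))                     # 959118 bits per second
print(round(throughput / 1_000_000, 3))      # efficiency ~0.959
```

Note the cycle is measured from the start of one data frame to the start of the next, which is the same as the gap between frame arrivals at B when nothing is lost.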
but comparing the same scenario, the second protocol gives worse performance. So the question is: why use it? Why send an ACK? Today we'll look at one reason, and in the next lecture a second reason, why an ACK is very useful. Before we look at that, any questions on the calculations? Everyone has a copy of the example? Let's go back to our lecture notes and look at flow control, one reason why we would introduce an ACK to be sent back. Think about the implementation of A and B. Inside those computers, one simple view is this: here is computer A, and here is computer B. The "packet from the network layer" in the picture really means that data, or payload, arrives to be sent; we haven't explained packets before, but think of the user at computer A pressing send, which triggers payload to be generated and sent by A to B. Normally the user has a lot of data to send, not just one frame; we may have megabytes to send. I have a file to transfer, I press send, and the protocol splits it into multiple frames. We can only transmit one frame at a time, so when we have payload to send at computer A, it sits in a buffer, some memory inside computer A, waiting to be sent. The picture illustrates that memory, buffer or queue: frames are put inside it before they are sent. Maybe we have five frames to send. We can only send one at a time, and each frame takes time to transmit, so the first one is processed and transmitted while the other four sit in memory waiting. When the first one is transmitted, A can move on to the second, and so on; it keeps transmitting. From B's perspective, as frames are received they're stored in some memory at computer B, a buffer, and then processed one at a time.
So B receives a frame, looks at it, does some processing, and then sends it up to the user who needs to receive the data; "packet to the network layer" really means delivering up to the user who's supposed to get the data. Processing the frame takes some time. Similarly, if we receive one frame, it goes in the buffer, and while we're processing it we may receive a second frame. That one has to wait in memory until the first is processed; we process the first and then move on to the second. So we use some memory to store the frames that have arrived while we haven't finished with the previous one. Now, memory in our computer is not infinite. We have a finite amount available, and in some communication devices it can be quite small. So there's a limited number of frames we can store, especially in the receive memory of computer B. What may happen if computer A is a fast computer and computer B is a slow computer? A can process and transmit frames as fast as possible. As they arrive at B they're stored in memory; the first frame is being processed when the second frame arrives, but because B is slow, it's still processing the first. The third frame arrives and is stored in the memory at B, and if B is very slow, it's still processing the first frame. The fourth frame arrives, then the fifth. Because this memory space is limited, if we don't control how fast A sends and B is very slow, the memory fills up with frames that arrive while B is still processing the first one. Let's say the memory can store six frames. So we've got six frames in the memory, it's full, and we're still processing the first one. What happens when the seventh frame arrives at B? What does B do? The CPU is working on the first frame.
Its memory is full with the six other frames, and then another frame arrives. What does B do? What's this computer going to do? Hang, crash? Hopefully it's not that badly implemented, but something bad is going to happen. Where is that frame going to go? It cannot be stored in memory, because the memory allocated for this receiver is full. We cannot store it, we cannot process it, so we discard it. The bits come in, and the receiver just ignores those bits, ignores that frame. We say we drop the frame: the receiver gets the frame, but it has nowhere to save it internally in memory, so it's discarded or dropped. It's as if B never received that frame. Now, that's a problem, an error in our communications, because A transmitted the frame, it got to B, there were no errors on the link, but B's memory was full, so it couldn't process the frame; it had nowhere to save it and had to discard it. We don't want that to happen, because it's a waste of resources: transmitting a frame but not using it at the receiver is very wasteful. It means that eventually A has to retransmit that frame, because if we want the data we'll have to send it again. So the problem is that we overflow the memory at the receiver, and it especially occurs when the transmitter is much faster than the receiver. It depends on the amount of memory, but it may happen. How do we stop it from happening, if we don't want to discard the frames that arrive? Increase the memory, make it larger? How much larger? Again, what if I have a supercomputer sending to an old PC? And it's not just the memory of the computer; it's the memory allocated to the receiver, like the network card, which can be quite limited. We can't just increase the memory to an infinite amount.
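The overflow scenario can be illustrated with a toy simulation. This is not real protocol code, just a sketch of my own: a six-frame receive buffer, a receiver stuck processing frame 1, and a sender that keeps transmitting.

```python
from collections import deque

# Toy receive buffer: six frame slots, and B is too busy processing DATA 1
# to remove anything from the buffer while new frames keep arriving.
BUFFER_SIZE = 6
buffer = deque()
dropped = []

def frame_arrives(frame):
    """Receiver side: store the frame if there is room, otherwise drop it."""
    if len(buffer) < BUFFER_SIZE:
        buffer.append(frame)
    else:
        dropped.append(frame)   # nowhere to save it: the frame is lost

# A fast sender transmits eight frames before B frees any buffer space.
for i in range(1, 9):
    frame_arrives(f"DATA {i}")

print(list(buffer))   # the first six frames, held in memory
print(dropped)        # ['DATA 7', 'DATA 8'] are discarded on arrival
```

Frames 7 and 8 are dropped even though the link delivered them perfectly, which is exactly the waste flow control is meant to prevent.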
For practical reasons we need a fixed amount of memory; we cannot allocate an infinite amount. Given that, we can still have drops. So what else can we do? Compress the file, make everything smaller? Again, we may still get the situation where the memory fills up. Limit the speed at which A sends: tell A to slow down. If B is taking a long time to process and realizes it can't process as fast as A is sending, then somehow we'd like B to tell A: please slow down, you're sending too fast; if you keep sending like this, I'm going to have to lose some data. So B sends some feedback to A, saying slow down your sending, and A slows down so that hopefully we will not overflow the memory at B. This is called flow control: we control the flow of data from A to B, and it's in fact B that controls the flow. How could B tell A to slow down? It needs to send a message to A, maybe an acknowledgement, and that acknowledgement carries the meaning "slow down", not just "thank you for the data"; maybe "don't send me more data yet". We'll see how that works. So if the sender sends too fast for the receiver, the buffer or memory may overflow, and that's a problem because data will be lost, we'll need to retransmit, and that reduces performance. We generally want to avoid that problem, and flow control is a mechanism to try to prevent buffer overflow at the receiver. Let's look at two main techniques for flow control. In the following discussion we'll assume there are no errors on the link; that is, data is always transmitted successfully across the link. Flow control is very important on individual links. It also turns out to be very important when we're communicating, say, from one PC to any other computer on the internet: not just across one link, but across many links.
Whenever you are downloading a file, accessing a webpage or sending an email, you're usually using a protocol called TCP, and it implements the flow control mechanisms we're about to talk about. So let's look at the first protocol for flow control. It's called stop-and-wait flow control, and we've seen it already: it's the second protocol we introduced in the time sequence diagram example. There are two frame types: DATA, containing the payload, and ACK, an acknowledgement, which acknowledges the receipt of the data. We've seen that we send data from A to B, and B sends an ACK back. The rules in this protocol: the source, A for example, transmits a data frame, and then must wait for an ACK frame before it sends the next data frame; I'm not allowed to send the subsequent data frame until I've received an ACK for the previous one. The destination, when it receives a data frame, replies with an ACK if it's ready for more data. If the destination receives a data frame while its memory is full, it's not ready for more data, so it doesn't send an ACK; it waits until it has processed the next frame, and once that's done it sends an ACK saying I'm ready for more. So here the ACK doesn't necessarily mean thank you, and it doesn't mean slow down; it means I'm ready for one more data frame. Essentially, if the destination is very busy processing frames, it can stop the flow by not sending an ACK, by delaying the sending of the ACK. Here's an example of the stop-and-wait flow control protocol. A wants to send data to B, and at different times computer A has some data to send. Why does A have data to send? Maybe the user at computer A pressed send. When does the user press send? It depends on what they're doing, so we can think of it as random; the times at which we have data to send from A to B vary.
So this is just one example; it could be different, but it tries to illustrate the concepts. At the start, let's say we have one data frame to send from A to B. When I say "DATA 1 arrives", it means the user has pressed send and has a single piece of data to send to B, DATA 1. So to get started, A transmits the first data frame; data frame 1 is transmitted, it propagates, it arrives at B, B processes the frame, and once it's ready for more data it will send an ACK. In this specific example data frame 1 arrives at this time, but maybe the buffer is full; maybe based on previous exchanges the buffer is currently full. So B waits, and then it processes the data. Maybe it takes a long time to process data frame 1, and once data frame 1 is completely processed we can think of the receiving user at B getting the data delivered to them; then B can send an ACK back to A saying I'm ready for more data. B transmits ACK frame 1, and it propagates back, because B has finished processing DATA 1 and is ready for more. Let's come back to A. A transmitted data frame 1 and then does nothing. The rules are that after transmitting a data frame you must stop and wait for an ACK to come back. So A is doing nothing, doing nothing, but at this time the user at computer A wants to send a second piece of data, DATA 2; I denote that as "DATA 2 arrives". But A is not allowed to send it yet, because the rules say we can't send a second frame until the first one has been ACKed. So DATA 2 is stored in memory at A. Only when we receive the ACK for frame 1 can we send data frame 2 containing the second piece of data. We transmit data frame 2, and it arrives. In this example I've shown that, for whatever reason, the processing is much faster the second time; processing depends on the CPU and other factors, so it may vary.
In the first case it took some time to process DATA 1; in the second case DATA 2 is processed almost immediately: receive the data frame, process it, deliver it to the user and send back an ACK. Back at computer A: it transmitted data frame 2 and is waiting for an ACK. It receives the ACK, but it has no data to send at this stage; we only send data if we have something to send. Then maybe the user presses send a third time, which triggers DATA 3 to arrive, and then we transmit data frame 3, and we could keep going; I've only finished the diagram there. This is stop-and-wait flow control: quite simply, it allows the source to send just one frame before getting an acknowledgement from the destination B. Any questions on the procedures for stop-and-wait flow control? What is that gap before DATA 3, what happens there? Remember, we only send a data frame if we have actual data to send, and I've denoted that by the "data arrives" arrows. A has some data to send, transmits a data frame, and is waiting for an ACK. It now has a second piece of data to send, but the rules say it is not allowed to send because it hasn't received the ACK for the previous one yet. We receive the first ACK, we send the second data frame. We receive the ACK for DATA 2, but we've got nothing to send, so we don't do anything during that period. In this specific example, A sits there waiting for data to arrive; when DATA 3 arrives, it transmits data frame 3. Whether that case occurs depends on when the user wants to send data, and we cannot predict that; it depends on the application, the user, many different factors. So I've just drawn one particular example where DATA 2 arrived while we were waiting for the ACK, and DATA 3 arrived after we'd received the ACK. Any other questions on this general procedure?
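The sender's side of this procedure can be sketched as a tiny state machine. The class and method names are my own invention, not any standard API; the point is just the rule that at most one unacknowledged frame is outstanding.

```python
# A minimal sketch of the stop-and-wait sender rule: buffer data as it
# arrives, and transmit only when no frame is awaiting an ACK.
class StopAndWaitSender:
    def __init__(self):
        self.waiting_for_ack = False
        self.queue = []          # data sitting in A's memory
        self.sent = []           # frames actually transmitted, in order

    def data_arrives(self, payload):
        """User pressed send: store the data, transmit it if allowed."""
        self.queue.append(payload)
        self._try_send()

    def ack_received(self):
        """ACK for the outstanding frame: we may transmit the next one."""
        self.waiting_for_ack = False
        self._try_send()

    def _try_send(self):
        if not self.waiting_for_ack and self.queue:
            self.sent.append(self.queue.pop(0))
            self.waiting_for_ack = True

# Replaying the lecture example: DATA 2 arrives while waiting for ACK 1,
# and DATA 3 arrives after ACK 2 when the queue is empty.
a = StopAndWaitSender()
a.data_arrives("DATA 1")   # transmitted immediately
a.data_arrives("DATA 2")   # held in memory: still waiting for ACK 1
a.ack_received()           # ACK 1 arrives: DATA 2 goes out
a.ack_received()           # ACK 2 arrives: nothing to send yet
a.data_arrives("DATA 3")   # transmitted immediately
print(a.sent)              # ['DATA 1', 'DATA 2', 'DATA 3']
```

Notice both idle cases from the diagram appear here: waiting with data queued, and waiting with nothing to send.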
On DATA 2 arriving: again, this is just one example, and remember that what the user does at computer A and the processing at computer B may vary over time. What I tried to illustrate is this: B receives data frame 1 and starts processing. For some reason the processing is very slow at this time, maybe the CPU is working on something else, so it takes a long time. Once we've finished processing DATA 1, we tell A we're ready for more by sending an ACK. In the second case, B receives data frame 2 and, let's assume, has nothing else to do, so it processes very quickly, almost immediately, and can immediately send an ACK. That's all; it's just a different scenario where B was busy in one case and not busy in the other. It depends on different factors. Any other questions before we look at the performance? So the exact exchange of frames depends on when data arrives, how much processing it takes, and of course the transmission time and propagation time, like we've calculated before. One thing we're often interested in is the best performance we can achieve, the upper limit. Assume we've always got data to send, so we never have the scenario where we have to wait for data: assuming A has an infinite supply of data to send, what's the best throughput we can achieve using this stop-and-wait flow control protocol? Well, you know the answer in one particular case: the example we went through. That was stop-and-wait. Remember, A transmits data, stops and waits for the ACK to come back; when the ACK gets back, it transmits the next data frame, then stops and waits for the next ACK, and so on. So that was an example where A always had data to send.
We had a processing time of one microsecond: B received the data, processed it for one microsecond, then sent the ACK back. Under that scenario, the best-case throughput we calculated was 959 kilobits per second. With different processing times, transmission times and so on, we'd get different performance, but we could calculate it with the same analysis. Stop-and-wait flow control is quite simple. It essentially assumes B has buffer space for one frame at a time; let's say B has enough memory to store just one frame. A transmits a frame; it's stored in memory and processed; when processing finishes, it's removed from memory, and an ACK is sent back to A saying I'm ready for one more frame. A sends the next frame, it's stored in memory, and after it's processed it's removed from memory, and an ACK comes back saying I've got space for one more frame, please send me another. That's stop-and-wait. It's very simple in that the receiver doesn't have to do much: it has to process each frame, but in terms of keeping track of what it has received, it's very simple. It's simple for the transmitter too: just transmit a frame and wait for an ACK. And it uses just a small amount of memory at the receiver B: enough to store one frame. Now let's see some issues with the performance of stop-and-wait flow control. This was the example we went through: 1,000-byte messages with a 20-byte header added, a 20-byte ACK frame, a two-kilometre link at one megabit per second with a given signal velocity, and we've calculated the throughput, so we have that answer already. Let's modify it and say we have a different link: instead of two kilometres, 2,000 kilometres. Everything else stays the same, but the link is now 2,000 kilometres.
Calculate the throughput in that case. You can either try to draw the diagram or you may quickly see how to get the throughput without it. It's the exact same scenario as the handout, but the link has changed to 2,000 kilometres. Quickly calculate the throughput and see the best-case performance for stop-and-wait flow control in that scenario. What's the transmission delay of our data frame? Go back to your example and look: the data rate is still the same and the frame size is still the same, so the transmission delay is the same as the previous example. Changing the link distance doesn't change the transmission delay: still 8,160 microseconds. The transmission delay of the ACK also doesn't change, and the processing delay at B is still one microsecond. The propagation delay, though: in the previous example we calculated it to be 10 microseconds. We've increased the link distance by a factor of 1,000, and 1,000 times the distance means 1,000 times as much time to propagate, so instead of 10 microseconds, the propagation delay is going to be 10,000 microseconds. Now, with those numbers, try to draw the time sequence diagram, or at least one transfer, and calculate the throughput from that. We don't have to draw many frames; you'll quickly see the situation. It essentially looks the same as before, but be careful that I haven't drawn the diagrams to scale.
It's hard to capture the scale in a small space, but let's try. We transmit the data frame, which takes 8,160 microseconds, and it propagates. Why did I draw the propagation as a big space? To capture the fact that in this case the propagation delay is 10,000 microseconds: the time from here to here is 10,000, bringing us to 18,160. So even though I can't draw it to scale, you can see that with a larger propagation delay, it takes longer for the data to get there and, eventually, longer for the ACK to come back. B processes for just one microsecond, transmits an ACK, and it propagates back: 8,160, plus 10,000 to get there, plus 1 for processing, plus 160 for the ACK transmission, plus another 10,000 to get back. Then we send the second data frame: transmit, propagate, process, ACK, propagate back. And then the third. You'll see it's the same time for each transfer: each transfer of a data frame, from when we start transmitting it until we fully receive the ACK, takes 28,321 microseconds, and the next one is the same, another 28,321. So instead of drawing them all, observe that A is transmitting one frame every 28,321 microseconds, and similarly B is receiving one frame in that time period. No frames are lost along the way, so if we transmit at that rate, we must receive at that rate at B: everything transmitted is received. From that we can calculate our throughput: B receives one frame, still containing 1,000 bytes, in a time we know. And someone will calculate that for me: 282,475. Correct, 282,475 bits per second. Better or worse than the previous example? Much worse. In the previous case we had about 960,000 bits per second; now we get about 282,000 bits per second.
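The 2,000-kilometre case is the same cycle sum as before, with only the propagation term changed. A quick check, again with my own variable names:

```python
# Stop-and-wait cycle for the 2,000 km link (all times in microseconds).
# Only the propagation delay changes: 1,000x the distance, 1,000x the delay.
T_DATA, T_PROC, T_ACK = 8160, 1, 160
T_PROP = 10_000

cycle = T_DATA + T_PROP + T_PROC + T_ACK + T_PROP
print(cycle)   # 28321 microseconds per transfer

throughput = (1000 * 8) / (cycle * 1e-6)   # bits per second
print(int(throughput))                     # 282475 bits per second
print(round(throughput / 1e6, 2))          # efficiency ~0.28
```

The data transmission is unchanged at 8,160 microseconds; the two 10,000-microsecond propagation legs now dominate the cycle, which is why the efficiency collapses.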
Or an efficiency of about 28%. All right, let's first make sure everyone follows the calculation. Compared to the previous example, the one thing that changed is the propagation delay: it went from 10 up to 10,000 microseconds. The picture looks the same, but if you tried to draw it to scale you'd see it's much different: transmit a data frame, propagate for 10,000, process, transmit the ACK, propagate back for another 10,000 microseconds, giving a total time of 28,321 microseconds for one transfer. If we kept drawing the other frames, you'd see that happens repeatedly. So the result: A transmits one frame in that duration, everything transmitted is received, so B receives at that rate, and one frame contains 1,000 bytes. Therefore we get a throughput of about 282 kilobits per second. The data rate is one megabit per second, so the efficiency is 28%, much lower than our previous case. And that's a problem with stop-and-wait flow control: in some scenarios it can be very, very inefficient. This is one case where we've got a link, but we're only using it less than a third of the time to deliver our data. You can see it in the picture: A transmits data for about 8,000 microseconds, then waits around 20,000 microseconds for the ACK to come back. It spends a lot of time not sending anything, and that's why it's inefficient. In general, this is due to the propagation delay being much larger than the transmission delay. The processing delay is usually quite small compared to the other components, and the ACK frame is usually quite small compared to the data frame: the ACK is made as small as possible, whereas the data frame carries a lot of data, so it may be much larger. So generally the ACK transmission and the processing are quite small.
The main contributors to the total time are the data transmission and the two propagation delays. So we get good performance, good efficiency and throughput, if the data transmission time is much larger than the propagation time, and poor performance if the propagation delay is much larger than the transmission time. In general, the trade-off with stop-and-wait depends on the transmission time, the propagation time and the ratio between them. If we made the link 20,000 kilometres, ten times longer, the propagation time would be much larger and our efficiency would go down. Or if our data frame were smaller, say 500 bytes of payload instead of 1,000, the transmission time would go down and our efficiency would go down. And another one for you to consider: if the data rate is increased, the efficiency also goes down. Using stop-and-wait flow control is only appropriate when we have a good ratio of transmission delay to propagation delay; if the propagation delay is very large, it's very inefficient. We calculated the efficiency for one case, and you can derive a general equation for it: the transmission delay of the payload (the 1,000 bytes in our case), divided by the sum of the transmission delay of the data frame (payload plus header), the transmission delay of the ACK, and two propagation delays. This one doesn't include processing; we could add that in if necessary. Assuming we can't control the header or the ACK, which are usually quite small and fixed, the efficiency really depends on the payload size and the propagation delay. Stop-and-wait is very inefficient if the link has a high data rate, like optical fibre, a long distance, like transmitting up to a satellite at 36,000 kilometres, or a small data frame.
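The general equation just described can be written as a short function and checked against both worked examples. The function name is mine; the formula is exactly the one in the lecture, with processing omitted.

```python
# Stop-and-wait efficiency, ignoring processing delay:
# T_payload / (T_data + T_ack + 2 * T_prop)
def sw_efficiency(payload_bits, header_bits, ack_bits, data_rate, t_prop):
    """All times in seconds; data_rate in bits per second."""
    t_payload = payload_bits / data_rate
    t_data = (payload_bits + header_bits) / data_rate
    t_ack = ack_bits / data_rate
    return t_payload / (t_data + t_ack + 2 * t_prop)

# 2 km link (t_prop = 10 us) versus 2,000 km link (t_prop = 10,000 us):
print(round(sw_efficiency(8000, 160, 160, 1e6, 10e-6), 3))      # ~0.959
print(round(sw_efficiency(8000, 160, 160, 1e6, 10_000e-6), 3))  # ~0.282
```

The tiny difference from the diagram-based numbers (95.9% vs 95.91%) is the one-microsecond processing delay this formula leaves out.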
Other things being the same: if you increase the distance, efficiency goes down; if you reduce the frame size, efficiency goes down; and if you increase the data rate, efficiency goes down. Any questions on stop-and-wait flow control before we look at an alternative? I don't want you to memorize the efficiency equation, because you can calculate it quite easily: draw the diagram and work the efficiency out from the timings. HDR, by the way, is just short for header there. In our case, we said that since A transmits one frame every 28,321 microseconds, that's the rate at which it's sending, and because the frames are always delivered, B must receive at that same rate. Now, it would be different if we had frames transmitted but not successfully received. If there were errors on the link, A would transmit a frame and B wouldn't receive it; then we'd have a different scenario and we'd need to consider what happens when there's an error. So far we're assuming there are no errors on the link: everything transmitted is received. That's why the two rates are the same. Next, processing time. Remember back to delay: transmission, propagation, and processing are the three components of delay we've considered so far (there's a fourth). Processing, we said, is very unpredictable. It depends on the device: the capabilities of the CPU, the memory, the software running on it, what it's doing at that time. So we cannot predict it very well. Often, though, it's very small compared to the other values; with computers nowadays it's usually tiny. It's not zero, but in our analysis we'll often assume it's so small that we can treat it as zero. In my examples, this one and the previous ones, I said there was a small processing delay at B, but A had no processing delay; maybe A was a faster computer. We could have a processing delay at A, but we cannot predict what it would be.
Any last questions on stop-and-wait? So, how are we going to improve the efficiency? What can we do? Anyone? We want better than 28%. Assume we can't control the link: we're given a link distance, we're given a data rate, and we have to use a particular frame size. Let's say we can't change those things. How can we improve the efficiency? Increasing the velocity of our signal won't help; we can't control that. The link characteristics are given, the frame size is given. Maybe think about what the problem is here. Changing the frame size? A larger frame would help, but let's assume we can't change the frame size: the protocol defines it, and we're stuck with that. We can't change the propagation delay, and we can't change the transmission delay. The problem is that A spends a lot of time waiting. In stop-and-wait, we send, we stop and wait, then we transmit the next one, then we wait again, then transmit. That's the problem here. The way to improve is to allow A to send the next frame immediately. Remember back to our very first protocol: A sent continuously, but we must have some limit on A, because the reason for limiting the speed at which A sends is so that it doesn't overflow B. So we'll introduce a new protocol that allows A to send more than one frame before it waits for an ACK. In stop-and-wait the rule was: transmit one, then stop and wait for an ACK. More generally, we could allow A to transmit N frames. Maybe N is two: transmit two frames, then stop and wait for an ACK. Or three: transmit one, two, three, then stop and wait. The more we're allowed to transmit before we have to wait for an ACK, the less time we spend waiting and the more time we spend transmitting, and more time transmitting gives us better efficiency. So instead of limiting A to just one frame, we'll allow N frames to be transmitted.
Let's draw an example, then look at the general protocol. This was our previous stop-and-wait protocol. To improve it, let's change the rule so that, for example, A can send three frames (I haven't drawn it to scale very well). The modified protocol, no longer called stop-and-wait, allows A in this specific example to send three frames before it has to wait for an ACK. Transmit data frame one; while that's propagating across the link, transmit data frame two and then data frame three. In this case we stop there and then wait for the first ACK to come back. From A's perspective, it spends more time transmitting data and less time waiting, which is good for efficiency: less time doing nothing. After transmitting the third frame, we wait for the ACK. Here the limit was three frames; stop-and-wait is essentially one frame at a time, and we've changed it to three frames at a time. We'll consider the general case later. When the first ACK comes back, it allows A to send one more frame. So if we put numbers to these: data one, data two (we'll not draw the entire timing), data three, stop, now wait for the ACK. Getting the ACK allows us to send one more: data four. If everything stays the same, then shortly after sending data four we should get the ACK for the next frame. So: allow A to send three frames, then wait for the first ACK to come back. And the same as in stop-and-wait, receiving an ACK allows you to send one more. We receive the first ACK back, so we can transmit data frame four. Then we receive the second ACK back; receiving an ACK allows you to send one more, so we transmit data frame five. Then we receive the ACK for frame three and transmit data frame six, and then wait for the ACK for frame four, and keep going in that way. We'll see the general case when there are more than three frames.
But the idea is to allow A to send more than one frame at a time. Here it's three; we could have four, or ten, and the value will be a parameter of our protocol. This gives us better performance, better efficiency, because in 28,321 microseconds we're no longer delivering just 1,000 bytes, we're delivering 3,000 bytes: three times the efficiency. Instead of 28%, we'd be up around 84 or 85%. So let's go back, look at the definition of this new protocol, and then go through some more examples. We'll not complete this example because we've got some others which are a bit clearer, and the question of what size frames to use we'll return to at a later stage. So, a new protocol. Stop-and-wait was protocol number one; the new one is called sliding window flow control: allow multiple frames to be in transit at a time. In the example I just showed, I allowed three frames to be sent one after another, then a wait for an ACK. That's sliding window flow control. We'll use a different example to explain it; if you flick forward a few pages in your handouts, you'll see this picture, on page number 21. It may not be as colorful; unfortunately the green doesn't come out in the black and white. At this stage we'll not do the calculations; we'll focus just on the exchange of frames, and later we'll come back to the numbers assigned to them. Let me explain what's being drawn, focusing on the time sequence diagram on the right. We have A and B; in this specific example A is allowed to send three frames before it has to wait for an ACK. And the timing is such that the transmission time of one frame is 100 time units, easier to calculate with than our 8,160, and the propagation delay is 200.
So transmit the first frame starting at time 0, finish at time 100; it arrives at time 300; 10 to transmit the ACK, so 310; 200 to propagate back; the first ACK comes back at time 510. Let's follow it through. Here we have a transmission delay of 100, propagation of 200, and a transmission time of 10 for an ACK; no processing, to keep it simple. In normal stop-and-wait, it would be one frame, ACK, next frame, ACK, and so on. But sliding window allows us, in this case, to send three frames: one, two, and three. They propagate across. The second frame arrives at 400. The third frame finishes transmitting at 300, so it arrives at 500; 10 to transmit the ACK; and that ACK comes back at 710 if you calculate the values. I've tried to draw a number of those values in. The rule is that we can only transmit, in this case, three frames before we have to wait for an ACK. Three frames are sent, and the first ACK comes back at 510: 100 plus 200 plus 10, plus 200 of propagation, is 510. Receiving an ACK allows us to send one more frame. So A has received one ACK; it can send the next frame, starting at 510 and finishing at 610. When does the ACK for the second frame come back? That's the frame labelled with a number one in the picture. At 610. That frame's transmission starts at 100 and finishes at 200; it arrives at 400; 10 for the ACK makes 410; 200 to come back; so the second ACK gets back at exactly 610, exactly when we finish transmitting the fourth frame. That's here. So A received an ACK, which allowed it to transmit this frame; then it receives the second ACK, allowing it to send the next frame; then it receives the third ACK at time 710, allowing it to send this sixth frame. Then it waits again, for the next ACK, which will be the ACK for the fourth frame, the solid line you see, and that comes back at 1020, allowing it to send one more. Then the ACKs for the other two frames in this batch come back. That's where we finish up.
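The timeline just walked through can be reproduced programmatically. This is a sketch of the handout's example only (frame transmission 100, one-way propagation 200, ACK transmission 10, no processing, window of 3); the variable names and loop structure are my own.

```python
# Timeline for the sliding-window example, in the lecture's time units.
TX, PROP, ACK_TX = 100, 200, 10
WINDOW = 3

send_finish = []   # time A finishes transmitting each frame
ack_back = []      # time the ACK for each frame returns to A

t = 0
for frame in range(6):
    if frame >= WINDOW:
        # A may not start frame k until the ACK for frame k - WINDOW is back
        t = max(t, ack_back[frame - WINDOW])
    t += TX
    send_finish.append(t)
    # the frame arrives at B, the ACK is transmitted, then propagates back
    ack_back.append(t + PROP + ACK_TX + PROP)

print(send_finish)  # [100, 200, 300, 610, 710, 810]
print(ack_back)     # [510, 610, 710, 1020, 1120, 1220]
```

The output matches the diagram: the fourth frame starts at 510 when the first ACK arrives, the second ACK lands at exactly 610, the sixth frame starts at 710, and the ACK for frame four returns at 1020.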
It keeps going in this pattern: send three, wait for the ACK of the first of those three. It takes 510 time units to get the first ACK back, and in that time we get three frames sent. With this protocol we don't just send one frame and wait for an ACK; we send three and then wait. That's three times the performance of stop-and-wait, because we're sending three times as much data in the same amount of time: three frames in 510 time units, where stop-and-wait would send just one frame in 510 time units. So this protocol gives us better performance, better efficiency. Can we do even better? How? Send four frames. Limit A to sending four frames, and you'd see the fourth one finish at 400; we'd wait 110 time units before the first ACK comes back. Still not quite the best case. What if we send five frames? 100, 200, 300, 400, 500; then we wait just 10 time units for the first ACK to come back. We can do better still: transmit six frames. The first ACK comes back while we're transmitting the sixth frame. In essence, we spend no time waiting for an ACK, because we're always transmitting frames; that is, an ACK comes back while we're still transmitting the last frame of the batch. Always transmitting is the highest efficiency we can achieve; that's the best we can do. So the number of frames we're allowed to transmit in a batch, here three, and we said we can do better with four, five, or six, is called the window, or the maximum window size. It's a parameter of the protocol. The larger the value, the better the efficiency, but only up to a point: if we go up to seven, you'll see it's no better than six, and eight is no better than six. So there's an upper limit that we should try to reach. That's sliding window. It can give better efficiency than stop-and-wait, but it's more complex.
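The effect of the window size can be summarized in one formula. This is a rough sketch under the example's assumptions (no header, no processing): the time from starting a frame to getting its ACK back is tx + 2 × prop + ack_tx = 510 time units, a window of N lets A transmit for N × tx of those 510 units, and efficiency caps at 100% once A never stops transmitting. The function name is my own.

```python
# Sliding-window efficiency vs. window size N, using the same example
# timings (frame tx 100, one-way propagation 200, ACK tx 10).
TX, PROP, ACK_TX = 100, 200, 10

def window_efficiency(n):
    cycle = TX + 2 * PROP + ACK_TX       # 510 time units per frame's round trip
    return min(1.0, n * TX / cycle)      # fraction of time A is transmitting

for n in range(1, 8):
    print(n, f"{window_efficiency(n):.0%}")
# 1 -> 20%, 3 -> 59%, 5 -> 98%, 6 and above -> 100%
```

This reproduces the lecture's conclusion: a window of 6 saturates the link in this example, and 7 or 8 is no better than 6.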
The complexity comes from the fact that the sender and receiver have to keep track of the frames sent and received; they need to count the frames. That is more complex, and in the next lecture we'll go through how they keep track of the frames sent and received; that's what the boxes on the left of the diagram illustrate. So that's the concept of sliding window: send N frames, then wait for an ACK, which gives better efficiency. Next lecture we'll go through this example in more detail and see the other trade-offs. We'll stop there, and see you next week.