So with error control, we need to retransmit, which means we need a mechanism for detecting when to retransmit. Stop-and-wait ARQ is very similar to stop-and-wait flow control: send data, receive the ACK; send data, receive the ACK. Of course, we care about what happens when things go wrong, say when we lose data. Yesterday we covered this case: A transmits data. The first frame is successful, everything's OK. The second data frame transmitted doesn't get to B. The way A knows it didn't get there is that no ACK comes back within some predefined time. A is waiting for an ACK, waiting, waiting, and no ACK comes back after that time. So we must know what this interval should be. The timer expires, and that triggers a retransmission: the same data frame, data frame two, is retransmitted, and in this case it gets delivered successfully. So that was the example of lost data. Then we moved to another case: a lost ACK. Same initial exchange: the first piece of data is delivered, B gets it. The second one is transmitted and delivered, B gets it. The ACK comes back, but something goes wrong with our link, there's an error, and the ACK doesn't get to A. So again, from A's perspective, it's waiting for the ACK, waiting for the ACK, and the ACK doesn't come within the timeout period. That triggers a retransmission of data two. The problem is that when that data two arrives at B, the question we raised yesterday is: what should B do with that data? The right response is to ignore it. Because if B accepts this data, takes the message in it and uses it in some way, that would be wrong, because it's the same data, just a retransmission of the previous one.
So we need some way for B to realise: ah, this piece of data I just received is in fact a retransmission of the previous one, and therefore I should ignore it. If we didn't ignore it then, depending on the application that was transferring the data, things would go wrong. The example I used yesterday: if the data said "transfer 10,000 baht to Steve's account", the bank receives it, transfers 10,000 baht into my account, but then receives a copy of that same message again. The bank doesn't know whether this is a second 10,000 baht that should be transferred, or just a copy of the first one, so it gets confused. So we must have some way to detect that this data is a retransmission of the earlier data; then we can ignore it. We still send the ACK, though. B receives the data and realises: this is a retransmission; maybe A didn't receive my ACK and retransmitted, so let's send the ACK again and hope it gets through. Now, how does B know to ignore this data message? What can B do to realise that this is a retransmission of the previous one? We can't just check the data: the data is a sequence of bits, and the bits we receive could be identical to the previous frame's bits intentionally; we don't know. So we need some other way for B to know this is a retransmission. The answer: sequence numbers. Give each data message a sequence number. A retransmission will carry the same sequence number as before, and that way B will know it's a retransmission. In fact, if you recall stop-and-wait flow control, there were no sequence numbers; we only introduced sequence numbers with sliding window. But with stop-and-wait ARQ, we actually do need sequence numbers, and I'll start to draw them.
With stop-and-wait, we only need a one-bit sequence number: zero, one, zero, one, and so on. So the first frame, let's say, gets sequence number zero. Even though it was the first piece of data, the sequence number we gave it was zero, and it was included in the frame. When B sends the ACK back, what acknowledgement number does it include? The sequence number is the number we include with data; when we send an ACK back, we include an acknowledgement number, the ACK number. So each frame now includes a number in its header. Remember, the ACK says: what's the next number I expect? B received data with sequence number zero, therefore it sends an ACK saying "I now expect data with sequence number one". A receives that ACK and sends the next piece of data, which of course has sequence number one. B receives data with sequence number one and sends an ACK with what ACK number? Zero: the number after one is zero when we use just one bit. With one bit, it goes zero, one, zero, one, so the next one expected is zero. That ACK doesn't get to A, though. So after a timeout, A retransmits the previous frame, with the same sequence number: it's still sequence number one. Now when B receives this data frame with sequence number one, it realizes it's not in order. B received data with sequence number zero, then sequence number one, and is now expecting sequence number zero; that's what it said in the ACK: I expect to get zero. If it receives one, it knows that's not correct, and that's how B knows to ignore this data. B is expecting sequence number zero but gets one, so it ignores the frame, yet still sends back an ACK. With what ACK number? What is B expecting? Look at the sequence numbers, the purple ones: B received sequence number one, and it's expecting zero.
It receives sequence number one again, ignores it, realizing this is out of sequence: it should be zero, one, zero, one, but here it's one, one. So it ignores this frame, but still tells A: I'm still waiting for data with sequence number zero. When A receives that, A realizes: OK, my data with sequence number one was successfully delivered, I can move on. An ACK of zero means the frame numbered one was delivered, since the number before zero is one. So the next data frame we send, the third one, has what sequence number? 50% chance of being correct. Zero: when B sent back an ACK saying "I expect to receive zero", then of course, if everything's OK, the next frame we send is zero. Assuming all is well, the data gets delivered and eventually the ACK comes back: data zero received, thank you, I now expect data with sequence number one. So the ACK number is one in that case. And on reception of that third data frame, B receives and processes the data. We haven't shown the processing time, but the third frame has been successfully delivered here. So we do have sequence numbers in stop-and-wait ARQ, and you can go back and fix the example from yesterday by adding sequence numbers. I didn't draw them yesterday, but the reason we need them is this case, the lost ACK. Let's go back to the original case first, then we'll go through that one. In this diagram, I've included the sequence numbers. In the normal case, the data sequence numbers go zero, one, zero, one, and keep alternating if everything's OK, and the ACKs go one, zero, one, zero, and so on. In the case of a lost ACK, it's the same: data zero, data one, but when we retransmit, of course, we don't change the sequence number.
The original data and the retransmission of it carry the same sequence number, and that's how B knows to ignore that data frame. Any questions on stop-and-wait ARQ? Look at the example you drew yesterday. Does it always begin with zero? Not necessarily. The sequence numbering must start somewhere, and it makes sense to start at zero, but in theory it could start at one; it just has to be agreed upon by the devices using the protocol. It's similar with sliding window from the previous topic: remember we had sequence numbers, say zero to three or zero to seven. It's natural to think they start at zero, but in practice they don't have to; we can start anywhere in the range, and in some protocols they don't start at zero. In my examples I usually start at zero, because that's where we start counting, in binary at least. As for the ACKs: the sequence numbers, of course, increment for each new data frame. The first data frame is zero, the next is one, and the next, well, we don't have two with just a one-bit sequence number, so instead of two we wrap around to zero, then increment back to one, and again, with no two, back to zero. So it's really just alternating. And the ACK is just saying: OK, B received data with sequence number zero, so it sends back an ACK saying "in the next data delivery, I expect sequence number one". The ACK number is what's expected. Does that address your question? So this is like sequence numbers in sliding window, but just one bit. Sometimes this is called an alternating bit protocol, because the data frame numbers just alternate: zero, one, zero, one.
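The receiver-side rule just described, accept in-order data, flip the one-bit sequence number, and always ACK with the next expected number, can be sketched in a few lines of Python. This is a minimal illustration of the alternating-bit idea, not a real protocol implementation; the function names and frame representation are made up for the example.

```python
# Sketch of B's logic in stop-and-wait ARQ (alternating bit protocol).
# Sequence and ACK numbers are a single bit: 0, 1, 0, 1, ...

def make_receiver():
    """Return a function that processes incoming data frames at B."""
    state = {"expected": 0}           # sequence number B is waiting for

    def on_frame(seq, payload, deliver):
        if seq == state["expected"]:
            deliver(payload)          # in-order frame: pass the data up
            state["expected"] ^= 1    # flip the one-bit sequence number
        # Whether the frame was new or a duplicate retransmission,
        # always send an ACK carrying the next expected sequence number,
        # so A can move on (or learn its earlier ACK was lost).
        return state["expected"]      # the ACK number

    return on_frame

received = []
rx = make_receiver()
print(rx(0, "data-1", received.append))   # new frame, seq 0 -> ACK 1
print(rx(1, "data-2", received.append))   # new frame, seq 1 -> ACK 0
print(rx(1, "data-2", received.append))   # duplicate (lost ACK case) -> still ACK 0
print(received)                           # each message delivered exactly once
```

The duplicate in the third call is ignored but still acknowledged, which is exactly the lost-ACK scenario above: A retransmitted because its ACK never arrived, and B's repeated ACK lets A move on.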
Let's go back to the example from yesterday, where I didn't draw sequence numbers, and add them for completeness. Same as before: sequence number zero, ACK number one. Next data frame: sequence number one. That data was lost. The next data frame sent has what sequence number? One, because it's a retransmission of the previous one. When B receives data one, it sends back ACK number zero: the next one I expect is zero, and so on. We need sequence numbers to keep track, especially when there's a lost ACK, so that B can recognise a retransmission. Any questions on stop-and-wait? That's the basic stop-and-wait ARQ. ARQ means automatic repeat request; it's the general name for these retransmission mechanisms. We automatically repeat the data transmission when we detect a loss, and in this case we detect a loss via a timeout. Now, a few issues related to this protocol, the next one, and in fact these types of data link protocols in general. One is the timer. The way it works is: after A sends the data, it starts its timer, and once the timer reaches some value, we say the timer expires or a timeout occurs, and that triggers a retransmission. What should that value be? How long should we wait for the ACK to come back? That's a parameter we need to set. Any ideas? Compare with the previous round? You mean what happened in the previous successful transfer? OK, and compared to what value? The propagation for the data to get to B, plus the ACK to be transmitted and get back. So we should wait long enough to give B a chance to get the ACK back to us. We send data; in the normal case, we know there will be some propagation delay, possibly some processing at B, some ACK transmission delay, plus some propagation back. So A knows there will be some delay before the ACK arrives.
So A needs to wait long enough to give that ACK a chance to get back. The timeout interval must be large enough that we can get an ACK back in the normal case, when nothing was lost. How long should that be? How would you calculate it if you were computer A? What do you need to know? Let's say it's your phone, communicating with the base station over some mobile technology. When you transmit data, you need to know how long it takes for the ACK to get back. What does that depend on? Well, propagation delay depends on distance. Does your phone know how far it is from the base station? Not very accurately; it may be able to estimate, but generally not. Sometimes we don't know what the propagation delay will be: now it might be one microsecond, tomorrow maybe ten microseconds. So in different situations, the propagation delay may not be known in advance. That's a bit of a problem. In this example I didn't draw the processing delay, but there is one, from when the frame is received until the ACK goes back. How long does it take a computer to process a frame? Estimates, anyone? How long does it take a computer to process, say, a thousand-byte data frame? Is it the same for every computer in the world? No. If my laptop is computer B, the processing delay may be very short; if your phone is computer B, it may be much longer. And A doesn't know that. So we cannot easily predict the processing delay. In fact, in some cases it's quite hard for A to know how long it takes to get the ACK back. So usually we estimate the maximum time, under the worst conditions, and set the timeout interval to be larger, or slightly larger, than that, giving some time for B to process and send the ACK back. So let's say the propagation delay is 100 milliseconds.
The transmission of the ACK takes 10, and the propagation back is 100. What should the timeout be? 100 there, 10 to transmit, 100 back: at least two propagation delays plus the ACK transmission, so at least 210 in that example. Larger, in fact, because sometimes there's a processing delay we cannot predict, so usually we make it slightly larger. Why not, instead of 210, make it 1,000 or a million milliseconds? I expect the propagation to be 100; maybe the ACK transmission changes. Why not make the timeout interval very, very long? What's the problem? Say it takes about 210 milliseconds in the normal case to get the ACK back, but I wait 1,000 milliseconds before I retransmit. What's the problem with that? Look at our efficiency when there is a loss: all the time I'm waiting for my timer to expire, I'm being inefficient; I'm not transferring data. The longer I wait before I resend, the lower the efficiency. So if there's a loss, I want the timeout to be as short as possible: as soon as the timer expires, resend, and hopefully it gets there the second time. We want the timeout interval to be short in the case of a loss, for efficiency, but long enough that in the case of no loss, the ACK has a chance to come back. So there's a trade-off: large enough to get the ACK back, small enough that our efficiency isn't too low and we don't wait too long. Generally, if we can estimate how long it takes to get an ACK back, we set the timeout slightly larger than that; in some cases there's no easy way to calculate it. Let's roughly draw those cases. We'll consider two: first, when we send the data and set the timeout too small.
Normally it takes some time for the data to propagate, and then the ACK propagates back. The ACK will arrive here, but if I start my timer at this time and set it too small, so it expires here, what happens? I resend when I didn't really have to; if I'd waited a little longer, I would have got the ACK and wouldn't have needed to resend. So that's the problem of setting the timeout interval too small: you retransmit when you don't need to, which is a waste. The other case: we set it too large. We send our data, and in the case that the data is lost, we set the timeout to be very long, so it times out late and only then do we retransmit. With a very long timeout, we spend a lot of time waiting before we resend, which is very inefficient; we want to spend as much time as possible transmitting data. A long timeout can lead to lower efficiency. So ideally, the timeout should be a little larger than, or at least equal to, the time it takes to get the ACK back. In practice, we usually give it a bit of freedom and make it a bit larger, so that some variation is taken into account. But it should be at least the time for the data to propagate, the ACK to be transmitted, and the ACK to propagate back. Not too small, not too large. Any questions about the timeout interval? In some systems and protocols, the value is defined in advance; in others, the software tries to estimate the best value over time, measuring how long the previous exchange took and adapting the timeout interval accordingly. But we won't go into those protocols. That's on one of the slides here, I think: how long should the timeout interval be? What's another thing we skipped over, related to sliding window, stop-and-wait flow control and the ARQ protocols alike? How big should our data frame be?
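The timeout arithmetic above can be written out directly. This sketch uses the lecture's numbers (100 ms propagation each way, 10 ms ACK transmission); the 25% safety margin for unknown processing delay is purely an illustrative assumption, not a rule from the lecture.

```python
# Minimum timeout from the worked example, all values in milliseconds.
prop_delay = 100   # one-way propagation delay, A <-> B
ack_tx = 10        # time for B to transmit the ACK frame

# Earliest the ACK can possibly arrive: data propagates to B,
# B transmits the ACK, the ACK propagates back.
min_rtt = prop_delay + ack_tx + prop_delay
print(min_rtt)     # 210

# Processing delay at B is unknown in advance, so set the timeout
# slightly larger; the 1.25 factor here is just an assumed margin.
timeout = min_rtt * 1.25
print(timeout)     # 262.5
```

Too small a value (below 210 here) causes needless retransmissions; too large a value wastes idle time after a loss, which is exactly the trade-off described above.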
That's another design issue. In these examples we didn't say, but in previous examples we used a value: a data frame containing 1,000 bytes of data plus a 20-byte header, and we calculated the efficiency, even for stop-and-wait. How big should our data frame be, in general? Should we make it smaller or larger for higher efficiency? If you go back to your notes, you'll see we said a larger data frame leads to higher efficiency. Around here we did some examples, and one of the cases was: with 1,000 bytes of data, we got an efficiency of about 96%. Then we reduced the data frame from 1,000 bytes down to 100 bytes, and our efficiency went down. The trend was: higher efficiency is achieved with a larger data frame, and that applies to the other protocols as well. Larger data frames can lead to higher efficiency. So why not make the data frame a million bytes? We said a larger data frame gives us higher efficiency, so why not a million, or ten million, or a gigabyte? The efficiency will indeed be higher in the normal case: with stop-and-wait, a larger frame gives higher efficiency. But what are the problems with a large data frame? With stop-and-wait ARQ, the problem is that we need to resend a lot if there's an error. If we transmit a frame and it's lost, we lose a lot of data all at once, and we need to retransmit a lot.
But if we have smaller frames and lose just one frame, we only lose a small amount of data and only need to retransmit a small amount. There's a slide that mentions that, but let's try to draw it. Say we have two options: one large data frame, or several smaller data frames. Case one: one big data frame. Case two: four small data frames, with the same amount of data in total. We're still sending the same number of bits, just in different-sized frames. Let's say that during the transmission there's a small error, a single bit error, occurring at a random time. This is a bit different from our other diagrams: think of time going this way, on a different axis. I start transmitting the frame here and finish here; if I use four frames, I start transmitting here and finish here. I won't show it going from A to B, just look at the frames. There's one bit error at a random time; let's choose a random time point, say here, at this red line. What I'm trying to show is that we transmit all our bits, and at some point in time there's an error on the link, so just one of the bits sent is in error, the bit at this point in time. Now look at the top case. When B receives that data frame, what does it do? It ignores it, discards it, because the frame has an error in it. And we said that if an error is detected, we discard that frame and eventually A will retransmit. So in the top case, we essentially lose the entire frame due to one bit, and what will eventually be retransmitted, sometime later, is that entire frame.
Sometime later, A will retransmit that frame because it had an error in it. Now in the second case, A transmitted four frames, and there's one bit error. What does B do when it receives those four frames? To make it a bit easier, let's number them. B receives the four frames; frame number two has an error in it, so B discards frame number two, and eventually it will need to be retransmitted. It depends on the scheme, and we'll see some other schemes later, but if we don't receive this piece of data, it will need to be retransmitted at some later time. So of those four frames, only the one containing the error needs to be resent. What I'm trying to show is that in the case of errors, smaller frames lead to fewer retransmissions, which is better. The second case is better because we only have to resend a small amount of data; in the first case, we need to resend everything. So in the case of errors, smaller frames are better. In the case of no errors, we've said larger frames are better, because they give higher efficiency; also, with a large amount of data, the header is a smaller proportion of the total frame. Another factor regarding frame size: let's say the receiver's buffer, the amount of memory at the receiver, is 4,500 bytes. Given different frame sizes, let's see how much of that buffer space we can utilize. What if we always transmit frames 4,000 bytes in length: how many frames fit in the receiver's buffer? One. We can't split a frame into smaller chunks, so with a 4,000-byte frame and a 4,500-byte buffer, we fit one frame at the receiver and effectively waste 500 bytes: we use up 4,000 of the buffer, and the other 500 bytes hold nothing. What if we used a different frame size, say 3,000 bytes? How many fit in the buffer?
Only one at a time, and in this case we waste even more: we have 4,500 bytes of memory but only use 3,000 bytes at a time, so the extra 1,500 bytes are of no use. Try a smaller frame: 2,000 bytes each. How many fit in the receiver buffer? Two at a time, wasting just 500 bytes. That's better than the second case, and the same waste as the first, but we fit two frames, 4,000 bytes of data in total. Try a smaller frame again: 1,000 bytes. Four frames fit, but we still waste some buffer space; there's no use having 4,500 bytes if we never use that last 500. Try 500 bytes: nine frames fit. How much do we waste? Nothing; that's the best case. So with smaller frames, for a given buffer size, we can in general utilize the buffer better and waste less space. It's the idea of a box: the smaller the objects you put in it, the more efficiently you can use that storage space. So smaller frames are better with respect to using the available buffer space, and, as in the previous case, smaller frames are better when there are errors, because there's less to retransmit; but with no errors, larger frames are better, giving higher efficiency, less overhead, more time transmitting. Somewhere there's a slide that says that. So what size frames do you use? In practice, the technologies usually impose a limit on the frame size: if you're using Wi-Fi, wireless LAN, there's a limit on the frame size, I think 1,500 bytes; on a LAN, 1,500 bytes plus some header. Why do they have limits, and what are good frame sizes? Well, larger frames mean less overhead due to the header: the more data, the smaller the header is compared to the total size. Smaller frames can utilize the buffer space better, as we just saw.
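The buffer-utilization walk-through above is easy to check with a few lines of arithmetic. This is just a sketch of the lecture's 4,500-byte example; the retransmission figures at the end restate the earlier one-big-frame versus four-small-frames comparison.

```python
# Buffer utilization for the 4,500-byte receiver buffer in the example.
# A frame cannot be split across buffer space, so any leftover bytes
# smaller than one frame are wasted.

BUFFER = 4500  # bytes of memory at the receiver

results = {}
for frame_size in (4000, 3000, 2000, 1000, 500):
    fits = BUFFER // frame_size            # whole frames that fit at once
    wasted = BUFFER - fits * frame_size    # buffer bytes we can never fill
    results[frame_size] = (fits, wasted)
    print(f"{frame_size}-byte frames: {fits} fit, {wasted} bytes wasted")

# Retransmission cost after a single bit error: one 4,000-byte frame
# must be resent whole, versus only the one bad 1,000-byte frame.
print("resend, one big frame:", 4000, "bytes")
print("resend, four small frames:", 1000, "bytes")
```

The 500-byte case wastes nothing (nine frames exactly fill the buffer), matching the "best case" in the lecture, while the 3,000-byte case wastes the most.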
If we have to retransmit, smaller frames are better: less to retransmit. There are a few other factors we won't cover, a third one being efficient sharing. So there's a trade-off; there's no optimal or best frame size. You need to consider these different factors. We can compare large and small frames under different conditions to see which is best for a particular scenario, but there's no single best value. So those are two issues relevant to all of these protocols, flow control and error control: what's the frame size, and what's a good timeout interval? Any questions on those two general design issues? Some of those reasons are why, when you download a large file across the internet, the file is not sent all at once: it's sent in chunks, in frames, or in general packets, part at a time. We've got two more protocols to go through, but they're quite similar to what we've seen already. With error control, there are three general approaches. Stop-and-wait ARQ we've gone through; the next two, go-back-N and selective reject, are similar to sliding window. Stop-and-wait: send one frame, get an ACK back. Sliding window: send a window of frames, then get an ACK back, or multiple ACKs. These two are based on sliding window, with just small differences between them. Let's use the go-back-N example. It comes up a bit small there; I have it bigger somewhere. The picture on the slide is slightly different from what we've been drawing: to simplify, instead of drawing rectangles for the frames, it just shows the timing of the exchange of those frames. Remember, with sliding window we can send multiple frames before we have to wait for an ACK; it's not data, ACK, data, ACK. Here we have multiple data frames; where it says "frame", it's a data frame.
Zero, one, two, three, four: A sends multiple frames. The ACKs in this case are called receive ready: B is saying "I'm ready to receive more" at different times. So this is the normal case: sending data frames, getting ACKs back. We want to consider what happens when something goes wrong. Let's write down what's been received by B. B has received frame zero; then zero and one; at this time, zero, one and two; then it receives frame three, so zero, one, two, three. Then, from B's perspective, a little later, it receives frame five, and that's a problem. B has received zero, one, two and three, and we expect to receive frames in order. We get zero, one, two and three, and then, while waiting, we receive the frame with sequence number five. That indicates to B that something's gone wrong, because if A is sending frames in order, B should have received four next, but it got five. So B knows there's been an error. In this go-back-N protocol, what B does is send back a special ACK message saying: there's been an error, I'm waiting for frame number four. It's called a reject message. The name's not so important; think of it as an ACK saying "I'm still expecting frame number four. I've got zero, one, two and three. I must receive them in order; don't send them to me out of order. If there are errors, you must retransmit." And what go-back-N does is: when B receives frame five, because it's out of order, it discards it, throws it away. Similarly, a little later, B receives frame six, also out of order, and discards that frame too. Why did B receive them at all? Because with the window in this example, A was allowed to send frames zero through six. The window was seven, so it was allowed to send them, but one of them was lost: frame four. A didn't know that, but B detected something went wrong because it received five before it got four.
So B sends back an ACK saying "I'm waiting for frame four", and discards any subsequent frames, five and six in this case. When this reject message reaches A, A realizes: ah, something's gone wrong. I've sent frames zero through six, but B is expecting frame four. Therefore I must go back and retransmit frame four, and five and six. I go back and retransmit N frames, where N is the number of frames since, and including, the one that was rejected. So we see what happens: B expecting four triggers A to retransmit frames four, five and six. Four, because that's what B is looking for, what it's expecting; five and six, because A knows that since sending frame four, it has also sent five and six, so it retransmits those three frames. That's go-back-N. With a sliding window, we keep track of the frames, and if B detects a loss, because it receives frames out of order, the source A will need to go back and retransmit multiple frames: in this case five and six as well as the lost one. Why? Because it keeps things simple for the receiver. Even though B did receive five and six, it just ignores them: it must get frames in order, and it doesn't need to buffer those two. It only stores the frames received in order, discarding five and six. The downside is that A needs to retransmit five and six. So it's easy for the receiver, but a little wasteful, in that we resend five and six even though they were successfully received. A variation of this is selective reject. We'll come back to go-back-N in a moment, but let's see the variation. B receives frame zero; then it has zero and one; then zero, one and two; then it receives three. Everything's OK so far, and it's waiting. This is selective reject, the next protocol. Frame four is not received; it was lost. B receives five. So again, B detects an error: I last received three, I've received five, and four is missing.
So B sends back a special ACK saying, effectively, please retransmit four. It's a reject message, or rather a selective reject: I reject, or I expect to receive, frame four. What's different from go-back-N is that we have zero, one, two, three, we're missing four, but we also keep five: we buffer five as well. So with selective reject, we do store frames that arrive out of order. That's a little more complex for the receiver, because it must have buffer space for such frames and keep track of which ones are missing. It sounds simple, but in some implementations that adds too much complexity. And when B receives six, it has zero, one, two, three, five and six, still missing four. When A receives that selective reject message, it knows: I need to retransmit frame four, but not five or six. This is where it differs from go-back-N: we only retransmit the frame that was rejected. And once frame four is received, B has all the frames in order. What is B expecting next? Seven. So it sends back a receive ready, an ACK saying "I now expect seven". So selective reject is more efficient, in that we don't retransmit as much, but a little more complex at the receiver, which must buffer the frames received out of order and keep track of them. That's the real difference between these two, go-back-N and selective reject. Let me go back to go-back-N, and then we'll see a couple of other points. At this point, B has just zero, one, two and three; we've discarded frame five. When we receive six, we don't do anything, just discard it, so we still have just zero, one, two and three. When A receives the reject message, it realizes it needs to retransmit four as well as the subsequent frames it has already sent, five and six. B receives four, sends a receive ready waiting for five, then eventually receives five and then six.
And send a receive ready, or an ACK, saying I'm waiting for frame number seven. So that's the same point as selective reject. Go-Back-N: more retransmissions but simpler. Selective reject: only retransmit the lost message, but more complex. In this case, when we have the window, the receiver can detect an error. By receiving frames out of order, B knows there's been an error and can send this special ACK back saying, really, there's been an error: this reject message. In this example, the response messages are called reject messages and receive ready messages. And in selective reject, there's also a selective reject message. But in general, they're all acknowledgements. They're all just ACKs coming back, using different ACK numbers and having different meanings. This example considers, what if the ACK is lost as well? Similar to stop and wait, we have a timeout. If we don't receive an ACK within time, we realize something's gone wrong. So in this case, A has transmitted five, six, seven. After transmitting each frame, it sets a timer. If it doesn't receive the ACK in time, that'll trigger a timeout and, slightly different in this case, trigger it to send a special message to B saying, what have you received? A receive ready with the poll bit set. And then B sends back an ACK. So if we lose an ACK, instead of retransmitting data, we send a special message to B saying, please send me the ACK again. That's the meaning. And B sends the ACK again. And once we get the ACK, then we know what number we're up to. We see that, OK, the ACK with sequence number seven was lost. We timed out, and that triggered this special message to be sent. Don't worry too much about the detail of the P-bit; this is just a message saying, please send me an ACK. Let's just finish this example. What has been received? Zero up until seven has been received. And then the next data frame, zero again, has been received at this point.
A hasn't received an ACK for some time, so it times out. It asks B, please send me an ACK again. Which is this one, saying, what are you expecting? And B is expecting the frame with sequence number one. Therefore, the next frame sent is sequence number one. And then we receive the frames all the way through to two. And it would keep going. The main thing I want you to pick up from Go-Back-N, and also selective reject, is how they detect at the receiver that they've lost a frame, and what is retransmitted. So this is the part I want you to understand. The lost ACK is not so common, but lost data is, and the difference between Go-Back-N and selective reject is important to know. In Go-Back-N, losing the data triggers a special ACK saying, please send four, five and six again. In selective reject, we lose the data, and B tells A, please send frame four again. Frames five and six are buffered. Once we receive frame four, we've got everything back in order and we can move on to frame seven and so on. Those are the main points of these two and their differences. The lost ACK is not so important because it's not so common. Any questions on Go-Back-N and selective reject? Just understand that they use a sliding window and they handle retransmissions in different manners. Any questions? Where did we get to in our slides? These describe how Go-Back-N works, and also this special ACK that comes when we lose an ACK. Again, just understand the difference between Go-Back-N and selective reject. That's the main point there; the details are not so important. Selective reject minimizes retransmissions compared to Go-Back-N, but the destination needs a larger buffer and it's more complex at the destination. So that's the trade-off. It turns out that, because of that extra complexity, Go-Back-N is more widely used, especially in simpler devices.
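The lost-ACK recovery above can also be sketched. This is a deliberately simplified, illustrative model, not from the lecture: the poll frame (the receive ready with the P-bit) is reduced to a callback that returns the sequence number B is expecting, and A resumes sending from there instead of blindly retransmitting.

```python
def handle_timeout(poll_receiver, send_frame, next_unsent):
    """Lost-ACK recovery sketch: on timeout, poll instead of resending.

    poll_receiver() models the P-bit poll frame; it returns the
    sequence number the receiver is expecting (the information
    that was in the lost ACK).
    """
    expected = poll_receiver()
    # Everything before `expected` was received; resume from there
    # up to (but not including) the next frame not yet sent.
    for seq in range(expected, next_unsent):
        send_frame(seq)
    return expected

sent = []
# B answers the poll with "I'm expecting frame 1"; A had already sent
# frames up to (but not including) 3, so it resends 1 and 2.
resume_from = handle_timeout(lambda: 1, sent.append, 3)
# resume_from is 1 and frames 1 and 2 are (re)sent.
```

The design point the lecture makes is visible here: a lost ACK costs one poll round trip, not a full data retransmission, which is why this case matters less than lost data.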
One thing that we've missed, and it comes up in these slides and we've seen it before without explaining it: what is the maximum window size that we have available? Just remember this. In sliding window, including Go-Back-N, it's two to the power of k, minus one, where k is the number of bits in the sequence number. I think it's on a slide somewhere, but if we have a k-bit sequence number — we're almost finished, just to summarize — what's the maximum window size that we should use? Remember in sliding window, we used an example where, I think, the window size was seven. We had sequence numbers from zero through to seven. That's a three-bit sequence number. The maximum window size was seven. Well, the general rule is that with sliding window and with Go-Back-N — sliding window flow control and Go-Back-N error control, the same mechanism — the maximum window size is two to the power of k, all minus one. So if k is three, it becomes seven. If k is four, it becomes 15. So that's the maximum window size. You can have smaller, but generally we set it to the maximum. With selective reject, it's slightly different: it's two to the power of k minus one, where the exponent is all of k minus one. So if k is three, with selective reject, what's the maximum window size? Two to the power of two: four. If k is four, then the maximum window size is eight for selective reject. It's smaller than the other two cases. It's due to the way the retransmissions work and how we keep track of the sequence numbers. If we set it larger than these maximum values, we'll get some other problems with the protocols. It's nice to study why, but we do not have time to explain those cases. Go-Back-N and sliding window: two to the power of k, minus one. Selective reject: two to the power of, all of k minus one. We've done timeouts. Where are they used? You can have a look.
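The two window-size rules can be written as one small function. A minimal sketch with illustrative names; the protocol labels are just strings chosen here, and the formulas are the ones stated above: 2^k − 1 for sliding window and Go-Back-N, 2^(k−1) for selective reject.

```python
def max_window(k_bits, protocol):
    """Maximum usable window size for a k-bit sequence number."""
    if protocol in ("sliding-window", "go-back-n"):
        return 2 ** k_bits - 1       # e.g. k=3 -> 7, k=4 -> 15
    if protocol == "selective-reject":
        return 2 ** (k_bits - 1)     # e.g. k=3 -> 4, k=4 -> 8
    raise ValueError(f"unknown protocol: {protocol}")

# The examples from the lecture:
assert max_window(3, "go-back-n") == 7
assert max_window(4, "go-back-n") == 15
assert max_window(3, "selective-reject") == 4
assert max_window(4, "selective-reject") == 8
```

Note how the only difference is where the "minus one" sits: outside the power for Go-Back-N, inside the exponent for selective reject.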
Many protocols in many different networks and links make use of these concepts: wireless LANs, LANs, dedicated links — say, from your home ADSL router to your ISP — and older links between devices. And in fact, the concepts of these protocols are used in the internet every day. When you download a file or send an email, you use TCP as a protocol, and TCP uses these mechanisms. We'll mention that protocol towards the end of the course, but flow control and error control are widely used in the internet and in communication systems. That ends this topic, and next week we'll move on to the next topic of multiplexing.