Last week we introduced and went through the details of how the wireless LAN DCF basic access works. It's how we transfer data in our data frames using the MAC protocol in Wi-Fi, wireless LANs. And we saw the general operation: if everything works well, the concept is to make sure no one else is sending. If no one else is sending, you send your data. If someone else is sending, wait for them to finish and then attempt to send your data. Of course, for that to work, we want to check that no one else is sending, not just instantaneously but for some period of time. So there are some time periods defined. We check: is anyone sending for this DIFS period? If not, okay. And then we've got another check: is anyone sending for this back-off period? We'll see today, and saw briefly last week, why that extra random back-off period can help us. Then we send our data if we detected that no one else was sending. And when the receiver receives the data, it waits a short time, the short interframe space, and then sends back an ACK. We finished the lecture with a long example on the board. A picture of that example is on the website, so you can look at what I drew in last week's lecture. In that example I used 802.11b because I had the values. In the quiz, those that have done it will see that the questions all use 802.11g. It's the same approach, just different numbers: you get different absolute values, but the operation is the same. What I'm going to do today is go through some more examples. Some of you have seen them because they are examples from the quiz, which used 802.11g. And I'll give you some handouts in a moment, rather than you having to copy them down from the board. In fact we'll do that now: take one, pass them along. What's missing?
That's mine, that's why. Take one, pass them along; there are six pictures there. They're also on the website, where you can see the nice coloured version, but this should be sufficient. And that's the first one that we're going to look at. Again, if you've done the quiz, you may have seen these. If you haven't done the quiz, then you've basically got the answers to the quiz in front of you, but you should still try it yourself. We need them today so we can get through and finish before the exam. So we see the basic operation. This is the one you have in front of you on the first page, labelled number one. If everything goes fine, when A wants to send data to B, it checks that no one's sending for DIFS, then backs off for some random number of slots; in this case it was 15 slots. Then it sends the data, waits a short interframe space, and an ACK comes back. What I try to show on these diagrams is the activity on the channel, in the medium, because we care about whether someone else is transmitting. So I show this smaller rectangle here indicating someone is transmitting, and from B's perspective, at that time the medium is busy. So the medium is busy from both their perspectives at that time. And of course it's busy because in fact B is receiving: A is transmitting, B is receiving in this case. We'll see this matters when we add C and D. It's useful to keep track of whether the medium is busy or idle, so we know what action to take. And these are the numbers that I use to calculate the timing in this case. Again, it's using 802.11g: same procedure, just different values for DIFS, the short interframe space, the slot time and the contention window. Recall that the back-off is a random number R, chosen between 0 and CW, the contention window. Whenever we send a new frame, CW is initialized to CWmin, the minimum value. In this case, for 11g, it's 15. We saw last week that for 11b it's 31. And we'll see a little bit later that CW may change.
It may go up to a maximum value. So, the values: DIFS is 28, the back-off is R slots, and the slot time is 9. In this example the data transmission time, which I've calculated before, is 168 microseconds, and the ACK is 20. Note that there's a data rate and a control rate. The data rate is the rate at which we transmit our data frames. The control rate is the rate at which we transmit control frames, in this example an ACK frame. The idea of sending the control frames at a lower speed is that when we use a lower data rate, our signal can be picked up more easily by stations further away. So if I'm using a transmission rate of 54 Mbps, let's say my range is 10 m. If my range is 10 m at 54 Mbps, then with 6 Mbps my range is going to be further, let's say 20 m. The idea is that some of the control frames (ACKs, and we'll see some others later: RTS, CTS, and it also applies to beacons) are not just destined for one station. It's also useful if other stations, including stations further away, receive them. So to get the control frames to more stations than just those that can receive data, they're often sent at a lower data rate, so more stations can receive them. The other numbers here are, for example, the random numbers I use in the examples. That first example is easy: we've seen it in the lecture, just with different numbers. On the next one we introduce a third station, C. In this case A does the same thing, except C also wants to send some data. And this is where my first mistake in the quiz was. The idea is that C wants to start sending data at time 180. That means on computer C the application, for example, has generated some data to send to some destination at time 180 microseconds after A, and as a result C has to defer.
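Putting the timing values just listed together, the total time for one successful transfer can be computed directly. A minimal sketch in Python, using the 802.11g values from the lecture (all times in microseconds; the fixed back-off of 15 slots matches the first example):

```python
# 802.11g timing values used in the lecture examples (microseconds).
DIFS = 28
SLOT = 9       # slot time
SIFS = 10
T_DATA = 168   # transmission time of the 1100-byte data frame
T_ACK = 20     # transmission time of the ACK, sent at the control rate

def transfer_time(backoff_slots):
    """Total time for one successful transfer:
    DIFS + backoff + data + SIFS + ACK."""
    return DIFS + backoff_slots * SLOT + T_DATA + SIFS + T_ACK

print(transfer_time(15))  # 361, as in the first example
```

The same function with C's later back-off of 3 slots gives 253 microseconds, which is why the per-frame timings in the handouts differ only in the back-off term.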
We saw in this example last week how it waits for the medium to become idle, tries to start the DIFS but the medium becomes busy again, and then waits, does the DIFS, the back-off, the data and eventually the ACK. So that's the second case; we saw it last week, just with different values. The third case is hard to see here but a bit easier on the paper in front of you: four stations, same as the previous scenario, but now station D also has data to send, at time 350. Station C had data to send at time 180; it has to defer and then gets to send. Station D has data to send at time 350; it will also have to defer. Note what happens with C and D: they both enter the DIFS at the same time. They both start their DIFS at time 361 in this example. That's the important part here. They both start DIFS, and because it's the same for both stations, 28 microseconds, they'll both start their back-off at the same time. And for the back-off, they each choose a random number. Here we have what's called contention. Contention is competition. The idea in the wireless LAN MAC is to have only one station transmit at a time, and the way they do that is with this contention mechanism, where they compete to see who transmits. Here we have two stations wanting to send, C and D. Which one sends? Remember, these two computers are operating independently. They don't communicate with each other to work out who will send; they simply follow the same algorithm. The idea is that the random back-off will determine who gets to send first, because the station that chooses the lower number of slots, the lower random number, will have to wait less time before it can send its data. In this case, C chose 3 as its random number and D chose 11. So C waits 3 slots and then sends its data. D is waiting a total of 11 slots: it has waited for 3 slots, but then realizes someone else is sending, so it must now defer. That's how C wins the competition to see who gets to transmit first.
And that's the purpose of this random back-off period: to make sure that both of them don't transmit at the same time. If two stations choose random numbers from some range, it should be unlikely that they both choose the same number. Recall the range in this case is from 0 to CW, specifically between 0 and 15, since CW is 15 here. So C chooses a random number between 0 and 15, and D chooses a random number between 0 and 15 at this point, to determine the back-off duration. What's the chance that they choose the same random number? Some of you have done the quiz. Others? If two stations each choose between 0 and 15 inclusive, station C's possible values are 0, 1, 2, 3, up to 15, and D's possible values likewise go up to 15. So look at all the combinations. There's (0, 0) as one combination, then (0, 1), (0, 2), up to (0, 15), then (1, 0), (1, 1), (1, 2), and so on. So there are 16 by 16 combinations: when the two choose between 16 values independently, there are 256 possible pairs. Which pairs have the same value? Well, (0, 0) has the same value, and (1, 1), and (2, 2). There are 16 pairs with the same value. So the probability that they choose a matching pair is 16 out of 256, which is 1 out of 16. Basic statistics says that if we're choosing random numbers from a uniform distribution between 0 and, in this case, 15, that is, over 16 values, the chance that the two choose the same value is 1 out of 16, and the chance that they do not is, of course, 15 out of 16. So the probability that they choose the same value is quite low. In this case, they chose different numbers. In most cases, they'll choose different values, but there's still a chance that they choose the same one. It's just random.
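The counting argument above can be checked by brute force; a small sketch that enumerates every pair of back-off choices:

```python
# Two stations each pick a backoff uniformly from 0..CW inclusive.
# Count the pairs where both pick the same value (a collision risk).
CW = 15
values = range(CW + 1)
pairs = [(c, d) for c in values for d in values]
same = [p for p in pairs if p[0] == p[1]]
print(len(pairs))              # 256 possible pairs (16 by 16)
print(len(same))               # 16 pairs with equal values
print(len(same) / len(pairs))  # 0.0625, i.e. 1 out of 16
```

Changing CW to 31 in the same sketch gives 1 out of 32, which is the improvement discussed below.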
So there's a chance that, if they do this 16 times, on average one of those times they'll choose the same values. And that's what the next picture shows. In the previous picture, everything's okay because C gets to transmit first and sends its data; D has to defer and then gets to send. So C won the competition to transmit first, and eventually D gets to transmit. If you look overall at the diagram, we see A is sending, then C, and then D: one station is sending at a time. The next one is an identical scenario, but I've changed one thing. It's hard to see here, but C and D choose the same number. They both, by chance, choose 3. This is the 1-out-of-16 case: on average, 1 out of 16 times they'll choose the same value. Here both chose 3. So C chose 3, D chose 3, and they both back off for 3 slots. And they both start transmitting at the same time, because the rule is: when you finish your back-off, send your data. So they both transmit their data. The two computers are transmitting, both to B in this scenario, and it results in a collision at B, because two stations are sending to B at the same time. And that's bad. That's what we want to avoid. Collisions are bad for performance, because we'll see it takes a long time to get the data to the other side; they reduce throughput and increase delay. So what happens when there's a collision? They both send their data. From B's perspective, it has received signals from two transmitters. Those signals overlap, and B cannot understand either of them; the signals interfere with each other. So B doesn't know any of the data that was transmitted by C or D. Therefore it cannot respond with an ACK, because it hasn't successfully received the data. So it does nothing; B doesn't respond. Both C and D, after sending their data, expect to receive an ACK. That's the rule: you send data, there should be a short interframe space, and then you receive the ACK.
That's what happened in the previous case: send the data, short interframe space, ACK. But now, since there's no ACK coming, we have another parameter in our protocol, which is a timeout period: a timeout for the ACK. We wait some time, and if we haven't received the ACK within that time, we assume something went wrong, that is, there was a collision. And that's shown here. Both C and D do the same thing: C is waiting for an ACK, D is waiting for an ACK. They each have a timer running, and once that timer reaches a specified value, they give up waiting and try a retransmission. A retransmission, we'll see shortly, goes through the same steps as a normal transmission. So we've got a new concept here, this ACK timeout. What is the value? In this example, it's very small; the value I set is 35. It's not fixed; it depends upon the implementation. Is it possible for another station to send data in here? Possible. Let's say there was a station E, and by chance it wanted to do the DIFS here and then the back-off. The ACK timeout is usually close to the DIFS; in our case the DIFS is 28 and the ACK timeout is 35. If I set the ACK timeout to be smaller, let's say 20, then even if someone tried to send in here, they wouldn't get the chance, because... oh no, sorry, you're right: another station can start a DIFS here, because no one else is transmitting. So if there was a station, let's say A, wanting to send data here, it would start DIFS and back-off the same as before, because from its perspective this data transfer attempt is finished now. A new attempt starts; it's just that the new one carries the same data as before. And there's one more thing that changes in the new one. So the back-off is important to avoid collisions. If there was no back-off, if the back-off was zero here, we'd always get these two stations sending data at the same time and getting a collision.
So if there was no back-off, it would be data, data, collision, always. By introducing this random back-off, only in some cases may there be a collision. In most cases, they'll choose different values, one will transmit first, and the other will have to defer. So that's the role of the back-off in this case. How do we reduce the chance of collisions? I don't want collisions to occur; they're bad for performance. How do I reduce the chance? Increase this value CW, the contention window. We said it's set to CWmin, 15, that is, we choose a random number between 0 and 15, 16 possible values. So C chooses from 16 values, D chooses from 16 values, and there's a 1-in-16 chance that they choose the same. Well, try something different: let CW equal 31, so they choose between 0 and 31. The chance of a collision, you can work it out, will now be 1 in 32, much less than 1 in 16. So the way to reduce the chance that they choose the same random number and therefore cause a collision is to increase CW: larger CW, fewer collisions. What's the problem with increasing CW? It takes a long time. Or, to come back to something simpler, what's the problem with the back-off? The back-off is time spent waiting, not spent sending data. The longer the back-off, the less efficient we are. Going back to our first simple case: between time 0 and time 361, we sent one piece of data. If the back-off was not 15 but 30 slots, then it would take longer to deliver the same amount of data; it would be less efficient. So the larger the back-off, the less efficient we are in sending the data. Therefore the larger the CW, the less efficient we are, but also the less chance of collisions. So there's a trade-off there. Back to the fourth one on your handouts: we see that by chance both time out, both start the DIFS, and each now chooses a new back-off, a new random number R.
Then we do the same procedure: once we finish the back-off, send the data (deferring if someone else is sending), SIFS, ACK, and then station C will get its chance in this scenario. There's one thing that differs here. This was the original data; this is a retransmission of the original data. Whenever we have to retransmit the data, we increase CW: we effectively double it. And that's on one of the slides shown here. This slide shows an example of how CW, the contention window, increases for retransmitted frames, but it's for 11b. In 11b we start at 31; in the example we just saw, we started at 15. But the concept is the same. For the original transmission of a frame, we set the contention window to the minimum value, let's say 31 in this example. When we need to retry that frame, because we had a collision, we effectively double it. Not quite double, a bit more than double: doubled plus one. In this case it's 2 to the power of 5 minus 1, then 2 to the power of 6 minus 1, and so on. So 31 doubled plus 1 is 63, and that's the new value of CW, the value used in the random number selection. Our back-off chooses a number R randomly between 0 and CW. For the first frame, CW is CWmin. If we have to retransmit, we increase it to approximately double that, 63 in this case. The reason is that we've just had a collision. Our original frame collided, an error, and we don't want another collision. So we increase CW, because increasing CW reduces the chance of a collision. But of course, it also increases our back-off, which is a problem. So we increase it to try to ensure that the first retry doesn't collide again. If it does, by chance, that is, C and D transmit and cause another collision, then we double it again, so there's an even smaller chance of a collision; eventually we have a 1-in-128 chance of choosing the same value.
And we keep increasing it: if we retransmit and there's another collision, double again, until we get to the maximum, CWmax, and we don't go above that. So there's a limit there. So consider the case where our original frame collides, we retransmit, there's another collision, we retransmit again, and keep retransmitting: seven retransmits. What do we do on the eighth retransmit, that is, after that many collisions in a row? We give up. Our computer would say: I cannot send the data, something's wrong. Either there's too much traffic on the network or I've lost the link, and the wireless LAN card would report an error to your operating system and maybe your application. So it may come all the way back to your application: your application will not be able to send the data, and the user would get some error. So there's a limit to how many retries we attempt. It differs between devices; normally it's set to seven, but in fact you can change it in some devices. It's a parameter of the MAC layer, so it may be lower if you want to give up earlier, or you can make it larger. But the idea is that the contention window starts at a small value and, as we have collisions, we increase it, effectively doubling it, until we get to the maximum contention window size, and then it remains the same. Once we get the data through... yes, doubled plus one. So when I say double, it's approximately double; the exact rule is two times the previous value, plus one. So 31, 63, 127. If it was 11g, it would be 15, 31, 63, 127, and so on. So CW becomes double the previous value plus one: approximately doubled. Again, in summary: a small CW means a smaller back-off, which is more efficient if there are no collisions.
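The growth of the contention window just described can be sketched in a few lines. A minimal example, assuming the usual 802.11 maximum of CWmax = 1023 (the slides only show the first few values, so treat the cap as an assumption here):

```python
# Contention window after each retransmission: CW -> 2*CW + 1,
# capped at CW_max (assumed here to be 1023).
def cw_sequence(cw_min, cw_max, retries):
    cw = cw_min
    seq = [cw]
    for _ in range(retries):
        cw = min(2 * cw + 1, cw_max)
        seq.append(cw)
    return seq

print(cw_sequence(15, 1023, 7))  # 802.11g: 15, 31, 63, 127, 255, 511, 1023, 1023
print(cw_sequence(31, 1023, 3))  # 802.11b: 31, 63, 127, 255
```

The sequence shows how quickly the window, and hence the average back-off, grows, which is exactly the efficiency cost in the trade-off being discussed.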
A larger CW reduces the chance of collisions. So that's the trade-off in choosing the optimal value. The design is: start small, and if we get collisions, we don't want any further ones, so increase it to further reduce the chance. Other than changing the value of CW, a retransmission follows the same steps as an original transmission. That is, after the timeout, we do DIFS, choose a random number of slots between 0 and CW, back off, send the data, short interframe space, and an ACK. So from here to here, it's the same procedure; it's just that CW has increased. And it's similar for C in this example. Any questions on the details of basic access? We want to avoid collisions because they're bad for performance, and the contention window is one way to manage that collision avoidance. Before we move on to the next cases, let's do some simple performance calculations. First, let's look at delay, and we'll go back to the original case, actually the second case, picture two in your handouts. Station A: how long did it take to deliver the data? In this case, 361 microseconds; everything is in microseconds. Station A has data to send at time zero, and it has finished that data transfer at 361. If it has more data to send, it just repeats the process, though of course it may have to defer if someone else is sending. So in this case it took 361 to finish. Actually, we can go back to the simpler case; that's all this one is. What if there are only stations A and B in the network, and only station A wants to send? Each data frame contained 1,100 bytes of payload. Payload is the amount of real data inside the frame; the frame in fact contains payload plus header and trailer. For simplicity, they were all the same in these questions. Let's say A has 10 megabytes of data to send, A to B, and no one else is transmitting; there are no other stations in the network.
Then what happens is that A does the DIFS and back-off, sends a data frame, gets the ACK, and then repeats that process: DIFS checks whether the medium is idle, and if so, back off, send the next data frame, ACK, and keep repeating. From that, we can calculate the approximate throughput that can be achieved for delivering that payload to station B. Every data transfer, if for simplicity we chose the same back-off value each time, 15, would take 361 microseconds. Every data transfer, we deliver 1,100 bytes in 361 microseconds, and from that we can get the throughput, because we just take bits divided by microseconds. Anyone remember or know the answer? If someone's done the quiz: it's approximately 30 megabits per second, simply 1,100 bytes divided by 361 microseconds. So that's the throughput in this case, assuming the back-off was always 15 slots. That's not true in real life, but just for simplicity, so we can calculate: if it was always 15 slots, then every data transfer would take 361 microseconds. So no matter how many frames we have to send, 10 megabytes or 1 gigabyte, assuming there are no other stations, we'd get that throughput. That would be the worst case, because if we choose between 0 and 15, then 15 is the longest back-off. What's the average back-off in that case? Let's say A has 1,000 frames to send. What's the average back-off it will choose? Remember, R is a random number between 0 and 15 in this example. What's the average value? If you choose a random number between 0 and 15 enough times, the average value of R will be 7.5. If you don't believe me, choose random numbers with your calculator, add them up, divide by the number of occurrences, and you'll get that. So on average, we'd wait for 7.5 slots.
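That average is just the mean of the 16 equally likely values; to convince yourself without a calculator:

```python
# Average backoff when R is uniform over 0..15: the mean of the
# 16 equally likely values is (0 + 1 + ... + 15) / 16.
values = list(range(16))
print(sum(values) / len(values))  # 7.5 slots on average
```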
Of course, we never wait for a fraction of a slot like 7.5; sometimes we wait 0 slots, 1 slot, 2 slots, or 15 slots, but on average 7.5. So in fact we can calculate the average time per transfer in this case. It's the DIFS plus 7.5 times the slot time plus the rest: DIFS is 28, plus 7.5 times the slot time of 9, which is 67.5, plus the data, which from memory was 168, plus SIFS plus ACK, something like 293.5. Where does 10 megabytes come from? It doesn't have to be 10 megabytes. Let's say I've got many data frames to send, 1,000 of them, each with 1,100 bytes of payload. The point is, if I have many to send, we just keep repeating this: 0 to 361, 361 to 722, and so on. So when I said 10 megabytes, the point was: assume there are many data frames. 10 megabytes, 100 megabytes, it doesn't matter. Why many? Because really, I don't care how long it takes one frame to get there. From a user's perspective, we usually transfer a lot of data, so we need to look at the average performance across all frames. And to look at the average, we need to know the average number of slots we back off. DIFS is always the same. Assuming the payload size is the same, the data transmission is always 168. SIFS is always 10, and the ACK transmission is always 20 in this case. The back-off is sometimes 0 times 9, sometimes 1 times 9, sometimes 15 times 9; on average, it's 7.5 times the slot time of 9. So on average, if we've got many frames to send, each one would take 293.5 microseconds. Someone can correct that. And I think before I said 30 megabits per second; that may have been wrong. So now: 1,100 bytes every 293.5 microseconds gives us approximately 30 megabits per second. Just to check: we have 1,100 bytes, which is 8,800 bits, and 293.5 microseconds is the time to do DIFS, back-off, data, SIFS, ACK. 8,800 bits every 293.5 microseconds gives our throughput. Let me get the value on the screen.
29.98 megabits per second, because we divide bits by microseconds. All right, let's do this one, because it's slightly different. I said 30 before, but that was wrong: it's 8,800 bits divided by 361, which is 24.4 megabits per second. Quite different. That's the case where the back-off was always 15 slots, 15 times 9. But in fact we don't always choose the same random number; on average, the back-off would be 7.5 slots. So the 29.98 figure is more realistic, and the 24.4 is the worst-case scenario. So if you're transferring a DVD, or even just a large 1-megabyte file, across the wireless LAN, assuming no one else is transmitting, and you're using 11g with the same parameters, then the throughput you can achieve is no more than about 29 to 30 megabits per second. You've got a 54-megabit-per-second data rate, but the throughput is about 30 megabits per second. Any questions about those calculations? So now we're moving into some performance measures. Throughput is the rate at which payload is delivered. Similarly, you can calculate delay: delay is the time it takes. In this case, the delay of one data frame, or the time between data frames, is 361. But that's not so important here; the total time to transfer is related to the throughput. That's one of the main reasons why, with your Wi-Fi, your wireless LAN, you'll never get your 54 megabits per second; you'll get no more than around 30 megabits per second in the same scenario, and usually less. All we really did was take the payload size divided by the time it takes. And that was the case when only A was sending to B. You can do the same sort of analysis if there are other stations: you look at the total time and how much payload is delivered. Of course, the longer the time, the lower the throughput, because the throughput is simply the payload divided by the time; as the time goes up, the throughput goes down.
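The two single-station throughput figures can be reproduced in a couple of lines (bits divided by microseconds gives megabits per second directly):

```python
# Throughput of station A sending back-to-back 1100-byte frames,
# using the 802.11g timing from the examples (times in microseconds).
payload_bits = 1100 * 8                  # 8800 bits per frame

t_worst = 28 + 15 * 9 + 168 + 10 + 20    # backoff of 15 slots -> 361
t_avg = 28 + 7.5 * 9 + 168 + 10 + 20     # average backoff of 7.5 slots -> 293.5

print(round(payload_bits / t_worst, 1))  # 24.4 Mbps, worst case
print(round(payload_bits / t_avg, 2))    # 29.98 Mbps, average case
```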
Why does the time go up? Because we may have deferrals. In this case, we don't. If we look at the network throughput: A starts at 0 and finishes at 361, and then C starts at 361 and finishes at 614. In this specific case, we've delivered two data frames, 2,200 bytes of payload, in 614 microseconds, and we could calculate the throughput from that. What if there's a collision? If there's a collision, the total time is even longer. We start at 0; station A is okay; station C starts at 361, and so does D; they don't finish until 1,152. So the total time to transfer the data in this case is much longer. Let's do a rough calculation. Here we have no collision; this is example three on your handouts. How much payload is delivered to destinations in total, in the network? 3,300 bytes, because three data frames have been successfully delivered. So the total payload is 3,300 bytes. How much time did it take? From 0 up to 912. You can calculate the network throughput from that: convert to bits, so 3,300 bytes is 26,400 bits, divided by the total time of 912 microseconds, which is 28.95 megabits per second. Of course, it depends upon the back-off values chosen; in this case, they chose 15, 11 and other values. In other cases it may vary, and this 912 may be different. That was with no collisions. The next case is if there was a collision between C and D. How much payload is delivered in this case? It's still 3,300 bytes. Delivered: the throughput is the rate at which payload is delivered to the destination, not sent. A delivers its 1,100-byte payload to B. These two payloads are transmitted, but B doesn't receive them; it was a collision, remember? So nothing is received from those transmissions, and that's why we had to retransmit. This one was then delivered to B, success, and this one was delivered to B. So the total payload delivered is still 3,300 bytes.
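Both network-throughput cases come down to the same division, total payload delivered over total time:

```python
# Network throughput: total payload delivered / total time.
payload_bits = 3 * 1100 * 8   # three delivered 1100-byte frames = 26400 bits

print(round(payload_bits / 912, 2))   # no collisions (0 to 912 us): 28.95 Mbps
print(round(payload_bits / 1152, 1))  # with the C/D collision: 22.9 Mbps
```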
How long did it take? The answer's at the end: 1,152 microseconds. Of course, it took longer because we had to retransmit. So what's the throughput? 26,400 bits over 1,152 microseconds is 22.9 megabits per second. We see this reduction in throughput because of that collision: we spend more time not delivering successful data. Retransmitting reduces our throughput. Of course, looking at just three frames in a network is not very statistically accurate, because in a different case there may be different back-off values and a different number of collisions. You should look at hundreds or thousands of frames over some period of time to get a statistically accurate value of throughput. But still, we can see the idea: the longer it takes to transfer our data, especially due to collisions, the lower our throughput. Any questions on the throughput calculations before we move on to the next case? Done the quiz? Not yet. Of course, in these cases CW is initially 15. If it was larger, then the back-off would be larger on average, the time would be longer, and the throughput lower. So larger CW, lower throughput, but less chance of collisions, and collisions also cause lower throughput. There's a trade-off there. If you're unlikely to get collisions, that is, if there's only one station in the network, you don't need a large CW. But the more stations wanting to transmit, the larger the contention window, the better. Here's another case, the fifth one in your handout, back to A, B and C. A wanted to send data starting at time 0, and at time 180 C wanted to send data; each does a DIFS. In this case, A and C are out of range of each other. An example: let's say B is an access point in the middle of a building, and the transmission range of our wireless devices is 10 metres. So to communicate with this access point, you need to be within 10 metres of it. And let's say station A, a client, a laptop, is here.
And the distance is 9 metres, so it's within range. Fine: A can communicate with B. And then C is here, another client, a laptop, and the distance is 8 metres, within range of the access point. But A and C are out of range of each other. What that means is that when A transmits, B can hear and successfully receive that, but C cannot, because it's too far away. If we looked at the signal: when A transmits, the signal is strong enough for B to receive and process, but it reduces in strength as it travels across distance, and by the time it arrives at C, the signal is so weak that C doesn't even see it as a signal; it's just some random background noise. So if A and C are out of range of each other, with our range of 10 metres, whatever A transmits is not received by C, and the same in the other direction: whatever C transmits is not received by A. This is important when we apply the MAC protocol, basic access, because whom we can receive from determines when we sense the medium to be busy or idle. And that's what we get in this case. A starts DIFS, backs off, and starts transmitting data; it's transmitting data between time 163 and 331. In the meantime, at time 180, C starts; it's got some data to send. DIFS takes us to 208, and it has a back-off in this case of just three slots, which brings us to 235. During the DIFS and back-off, it senses the medium, checking: is anyone else sending? Even though A is transmitting, it's too far away for C to receive that signal, so C doesn't know A is transmitting. It senses the medium to be idle, so it finishes the DIFS and starts the back-off; it's still idle, so it finishes the back-off. Since the medium was idle, it gets to transmit its data. So C transmits its data to B. A is transmitting to B from time 163 to 331, and C is transmitting to B from time 235 to 403. We get a collision.
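The collision at B can be seen as a simple interval-overlap check; a sketch with the times from this example:

```python
# Hidden-terminal collision: A transmits 163-331, C transmits 235-403
# (microseconds). B hears both, so if the two transmission intervals
# overlap, B sees a collision, even though A and C each sensed an
# idle medium before sending.
def overlaps(tx1, tx2):
    return tx1[0] < tx2[1] and tx2[0] < tx1[1]

a_tx = (163, 331)
c_tx = (235, 403)
print(overlaps(a_tx, c_tx))  # True -> collision at B
```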
From B's perspective, for some of that time, those transmissions overlap. Even though it's only a small portion of the time, normally that means B cannot receive or understand either of those transmissions. There's a collision, and both data frames are lost; neither can be processed by B. And we don't like collisions; we want to avoid them. There can be other scenarios. Let's say A's data is received and B starts sending an ACK back. In that case, C would hear the ACK. Remember, C cannot hear what A sends, but C can hear what B sends, being within range. So if, for example, the data was received and B starts sending the ACK, and then C wanted to send, C would sense the medium to be busy and defer, and everything would work. It's only a problem in the case where the two stations are outside of range of each other. They're what we often call hidden from each other; they're hidden stations. A is hidden from C, and C is hidden from A. And we get this problem, which is called the hidden station or hidden terminal problem; terminals or stations, same meaning. Of course, it results in a collision, and the collision means there's no ACK coming back. A times out and starts to retransmit: DIFS, backoff. And C times out and does its DIFS and backoff. In this instance, they are lucky in that there's not another collision. What happens is that, because of the timing, A gets to transmit the data while C is still backing off. Only from luck in this case; only because C chose a large backoff of 27. Why did it choose a value of 27? Remember, in the first instance, it chooses between 0 and 15. After a collision, it chooses between 0 and 31. So there's a chance of getting a higher value, with the intention of reducing the chance of a second collision, which it does in this case. And we see what happened, as we just mentioned: A sends the data to B, B sends back an ACK, and C receives that ACK.
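The "0 to 15 first, 0 to 31 after a collision" rule is the binary exponential backoff. A minimal sketch, using the 802.11g CWmin of 15 from the lecture; CWmax of 1023 is the standard's cap, not a value given in this lecture.

```python
import random

CW_MIN = 15    # 802.11g minimum contention window (from the lecture)
CW_MAX = 1023  # standard's upper cap (assumed here, not from the lecture)

def contention_window(failed_attempts: int) -> int:
    """CW starts at CW_MIN and (roughly) doubles after each collision:
    15, 31, 63, 127, ... capped at CW_MAX."""
    return min((CW_MIN + 1) * 2 ** failed_attempts - 1, CW_MAX)

def draw_backoff(failed_attempts: int) -> int:
    """Pick a uniform random slot count in [0, CW]."""
    return random.randint(0, contention_window(failed_attempts))

print([contention_window(r) for r in range(4)])  # [15, 31, 63, 127]
```

So C's value of 27 was only possible on the retry: it lies in [0, 31] but outside the initial [0, 15] range.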
It senses the medium to be busy at that point and defers, because they are within range of each other. And then DIFS, backoff, data, ACK. So here we got a collision not due to choosing the same random number, but due to what we call hidden terminals, hidden stations: A and C were hidden from each other. And of course, we want to avoid that because it increases the total time to transmit our data. In fact, it didn't increase in this case only because... why didn't it increase? Because of the backoffs that I chose; ah, yes, because of the backoff here of 7. Generally, when we get a collision, the total time will increase, compared to the collision-free case. In the other cases we considered, we had three frames; here it's just two data frames: 2,200 bytes delivered in 908 microseconds. So I think you'll see the throughput is lower than the two values that we calculated, because there are just two data frames. So, two main reasons for collisions. One: two stations choose the same random backoff, which we can make less likely by increasing the contention window. The second happens when our network is such that we have stations or terminals hidden from each other, too far away. Like an access point in the middle of a building, some clients on one side and other clients on the other side, each just within range of the access point but too far away from the other clients. To avoid collisions due to the backoff, we increase the contention window. How do we avoid collisions due to hidden terminals? We use a new scheme called RTS/CTS. So back to our slides for the last 10 minutes. We've gone through basic access with several examples. We've gone through what happens if we don't receive an ACK: we have a timeout, and then we retransmit, but we increase the contention window. In this case, it's effectively doubled. And in fact, we keep increasing the contention window.
When we have a new frame, once we're successful, we revert back to the minimum value, and we've discussed the purpose of that. And we just saw there's a problem if we have hidden stations or hidden terminals, like shown on the board and here. Two clients are within range of the access point but outside of range of each other. So the green circle is the transmission range of client A: it covers the access point, but A and B are outside of range of each other. And we get this collision here, like we saw in the example before. How do we fix this? Before we send the data, we try to inform other stations that we're about to send data. Currently, in basic access, when we've got data to send, after the backoff we send the data frame. One thing we can improve, to avoid collisions due to hidden stations, is to inform as many stations as possible that we're about to send data, and then send the data. And we do that using a different scheme called RTS/CTS. So far we've covered the part called basic access. There's an alternative called RTS/CTS: request to send and clear to send. What happens is that A first requests from the access point: can I send? If the access point knows that no one else is sending, then it will send back: yes, you're clear to send. The idea is that both the request-to-send and clear-to-send messages will inform other nearby stations that a transmission is about to happen. And then A sends the data. So we've got two new frames. Now we have RTS, CTS, data and ACK. RTS and CTS are usually small frames, about the same size as an ACK, an acknowledgement. Let's just go through the basic operation to finish today. Okay, so we've got RTS frames, request-to-send frames, about 20 bytes, depending on the fields. The CTS frame is also small. Do you have a picture? This one's too complex; the one in your handout's easier. This is figure six in the handout.
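To contrast the two schemes, here is a small sketch of the on-air sequence in each mode. The rule it encodes is from the lecture: the sender does DIFS and backoff first, and consecutive frames in the exchange are separated by a SIFS.

```python
# Sketch of the on-air sequence for each access mode (timing values
# omitted): DIFS + backoff first, then frames separated by SIFS.

def on_air_sequence(frames: list[str]) -> list[str]:
    seq = ["DIFS", "backoff"]
    for i, frame in enumerate(frames):
        if i > 0:
            seq.append("SIFS")  # short interframe space between frames
        seq.append(frame)
    return seq

print(on_air_sequence(["DATA", "ACK"]))                # basic access
print(on_air_sequence(["RTS", "CTS", "DATA", "ACK"]))  # RTS/CTS
```

RTS/CTS adds two small frames and two SIFS gaps of overhead before the data; the next example shows why that overhead can still pay off when stations are hidden.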
A and C are hidden from each other. A is within range of B, C is within range of B, but like in the previous case where we had hidden terminals, A cannot hear C. But this time we're using RTS/CTS. So we have the same procedure, DIFS, backoff, but now the difference: instead of sending the data frame, A sends a special request-to-send frame. No data. It sends a request to send to B, saying: I want to send data. B receives that, waits a short interframe space, and should respond with a clear to send, saying: you're clear to send data, you're allowed to. A receives the clear to send, waits a short interframe space, and then sends the data. Then SIFS, ACK. So the new part here, compared to basic access, is that before we send the data we have this exchange of RTS, SIFS, CTS, SIFS. How does it help us? When we send the RTS from A to B, C doesn't hear it. C, at time 180, wants to send some data and starts its DIFS. But note, when B responds with a clear to send, because C is within range of B, C does hear the clear to send, and that clear to send is telling C: someone else is about to send, wait. It effectively tells C to defer until the other data transmission is finished. So even though C cannot hear the other data transmission, this CTS has a special field inside that says someone else is about to send for 208 microseconds; wait 208 microseconds before you retry, that is, before your DIFS and backoff. As a result, we don't get a collision, and we can improve the time to deliver our data. The total time is down to 734 in this case. So: DIFS, backoff, RTS, SIFS, CTS, SIFS, data, SIFS, ACK. Both the RTS and CTS contain a field in the header giving the duration that this data transmission is expected to take, because A knows how long it will take if everything works well. It knows there will be SIFS, CTS, SIFS, and it knows the length of the data. So A sets the duration field: from here to the end should be 238.
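The two duration values fit together: the CTS duration (208 µs) covers SIFS + data + SIFS + ACK, and the RTS duration adds one more SIFS plus the CTS airtime on top. A hedged sketch of that relationship, where SIFS = 10 µs is the 802.11g value and the 20 µs CTS airtime is my assumption, chosen because it makes the lecture's two numbers consistent.

```python
# Relationship between the RTS and CTS duration fields.
# SIFS = 10 us is the 802.11g value; the 20 us CTS airtime is an
# assumption for illustration (it makes 208 and 238 line up).

SIFS_US = 10
CTS_AIRTIME_US = 20  # assumed

def rts_duration(cts_duration_us: int) -> int:
    """RTS duration = SIFS + CTS airtime + everything the CTS duration
    covers (SIFS + DATA + SIFS + ACK)."""
    return SIFS_US + CTS_AIRTIME_US + cts_duration_us

print(rts_duration(208))  # 238, matching the RTS duration in the example
```

This is the mechanism C relies on: even though it never hears A, the duration it reads out of the CTS tells it exactly how long to defer.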
And when B receives that, it knows that from the end of the CTS to the end of the ACK should be another 208. So the duration is how long they expect it to take to deliver the data, and that's used by C to say: someone else is sending for 208 microseconds, let's just defer, not even try, and then I'll try again, DIFS, backoff and so on. RTS/CTS is good in the event that we have hidden terminals, because even when they're hidden, the hidden terminal C becomes informed that someone else is about to send, and therefore a collision doesn't occur. Yep. Okay, in the case where A sends an RTS but B doesn't send the clear to send, then A will not be able to proceed to the data. It will treat that as an error and will have to try again and do a retransmission. So it will have to do a DIFS and backoff and try again with the RTS. So if we don't receive a CTS, then we cannot proceed; we'll have to try again later. And similarly, if something went wrong and we didn't receive the data, then again... well, actually that wouldn't be a retransmission of the RTS, but we need to get the entire process completed. So, question: what if they both send an RTS? A sends an RTS and C sends an RTS here. Those RTSs will collide. That's a problem, effectively the same as here, where they both sent data overlapping in time: collision, that was bad. It can happen with the RTSs as well. They both send the request to send at the same time: collision, and we'd need a retransmission. The difference is that normally data is much longer than an RTS. An RTS is, say, 15 or 20 bytes; data is 1,000 or 1,500 bytes. The chance of two large frames overlapping in time is much larger than the chance of two small frames overlapping in time. It depends upon when A and C start to transmit: the larger the frame, the larger the chance that they will overlap in time, like here. By using smaller frames, and the RTSs are smaller frames, there's a smaller chance that they will be transmitting at the same time.
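We can put a rough number on that intuition: if two hidden stations each start transmitting at a uniformly random instant within some window, they collide whenever their start times are closer together than one frame airtime. A sketch under assumed numbers: the 54 Mb/s rate (802.11g's top rate) and the 1 ms window are illustrative, and preambles and headers are ignored.

```python
# Why small frames collide less often: two hidden stations collide if
# their uniformly random start times are within one frame airtime of
# each other. The 54 Mb/s rate and 1 ms window are illustrative
# assumptions; preamble/header overheads are ignored.

RATE_MBPS = 54
WINDOW_US = 1000.0  # window in which each station picks a start time

def airtime_us(frame_bytes: int) -> float:
    return frame_bytes * 8 / RATE_MBPS

def collision_probability(frame_bytes: int) -> float:
    """P(|t1 - t2| < airtime) for t1, t2 uniform on [0, WINDOW_US]:
    1 - (1 - t/W)^2 = 2(t/W) - (t/W)^2."""
    r = min(1.0, airtime_us(frame_bytes) / WINDOW_US)
    return 2 * r - r * r

print(collision_probability(1500) > 10 * collision_probability(20))  # True
```

With these assumptions, a 1,500-byte data frame is tens of times more likely to suffer an overlapping start than a 20-byte RTS, which is the whole point of putting the short frame first.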
So there is a chance of collision using RTSs, but it's much smaller in general than if they were just using data frames, and that reduces the chance of collisions. What we'll do tomorrow is go through the calculations for this RTS/CTS example in more depth, come back and look at the probability of collisions, and then I think we can summarise the main parts of the performance of wireless LANs. That gets us mostly to the end of wireless LAN performance. There are a few more slides remaining, but I think we've covered most of them. So make sure from today you understand the details of basic access, how it works, how to calculate the performance, and understand the basics, the idea, of RTS/CTS. And tomorrow we'll cover the details of RTS/CTS and try to finish this topic. Any questions on these cases? Yeah. How do we know when to send the RTS? Whenever A has data to send. A has something to send to B, so the procedure is DIFS, backoff, then send the RTS. Same as... yeah, highly likely. Which one? This looks wrong: 416 should not be after 438. You will fix that and find the right answer; that is your task. I will try and find the solution tomorrow as well. You'll see...