So really we need to finish on RTS-CTS and how that helps us in some cases, especially where we have hidden stations. Before we explain that: yesterday someone pointed out an error in one of my pictures in another area. I think this is in the handouts I gave you yesterday, the fifth diagram. It was the case where we used basic access and there are hidden terminals, where A and C are hidden from each other. There was an error in one of the numbers, and this is the fixed version. In yours this number is, I think, 416; it should be 466. You can fix that in your handouts or download the updated picture. And in yours the back-off was 27 slots; so that I didn't have to fix all the other numbers, I changed it to 21. So in your handouts, change 416 to 466 and the back-off from 27 down to 21, and then I think you'll get the correct answer, the same answer. That's the correction from yesterday, and of course all these pictures are on the website to download if you need them. The final picture we went through, and will go through again, is how we can use RTS-CTS to handle this situation, or improve the chance of avoiding collisions in the presence of hidden terminals. The problem with hidden terminals, or hidden stations, is that when, for example, two clients are on either side of the access point but outside each other's range, they cannot sense each other. When they sense the medium to check if someone is sending, they don't know whether the other one is sending. The result is that both may transmit to the access point, causing a collision at the access point. Here's an extreme example where A has some data to send and starts transmitting, and then some time later B has some data to send and starts transmitting, because it doesn't know A is sending.
Even if there's only a small overlap — from the access point's perspective it receives everything from A and then there's just a small overlap in time between the two transmissions — we consider that a collision, and it's very difficult for the access point to understand either transmission. So from the access point's perspective, this small overlap in receiving two transmissions means both of them are lost; it cannot understand either. That's our normal assumption: if any part of the transmissions overlap, we lose them both. So that's the problem. The idea of RTS-CTS, request-to-send / clear-to-send, is to ask the other node: can I send? And the other node, the access point, may respond with a clear-to-send, after which the station sends the data. So it looks more like this: client A has some data to send and sends a short RTS message — "I request the opportunity to send to you". If the access point doesn't think anyone else is sending at the moment, it sends back a clear-to-send, saying: no one else is sending at the moment, you are clear to send your data to me. Once A receives the clear-to-send, it can transmit the data. Importantly, in this case the clear-to-send informs B that someone else is about to send. Recall it's a broadcast medium: when the access point sends a message to A — in this case the clear-to-send — it's also received by B. Even though it's destined for A, B receives it, and because it's a clear-to-send addressed to someone else, B realises that someone else is about to send and therefore will not attempt to send; it will defer. The duration it defers for is informed by the access point, because inside the CTS message there's a field that stores the duration of the upcoming transmission. Both the RTS and the CTS messages have a field indicating the duration of the upcoming transmission.
So B defers and doesn't attempt to send, A sends its data, an ACK comes back, and then B can start again: DIFS, back-off, RTS, CTS, data, ACK. We don't get a collision in this case. Collisions are bad because they usually mean a larger contention window and a retransmission — it takes much more time. That's the idea of using RTS-CTS to handle hidden stations. The overall trade-off: without RTS-CTS — what's called basic access, where we just send the data first — there's more chance of collisions when we have hidden stations. With RTS-CTS we can reduce the chance of those collisions, which is good, but we introduce some extra overhead: every time we send data, we must first spend some time sending an RTS and getting a CTS back, so we're less efficient. So there's a trade-off. If collisions are likely, using RTS-CTS can be beneficial; if you're in a scenario where collisions are unlikely, RTS-CTS is not so beneficial. We've now introduced two new types of frames, so we have four frames in the data transfer phase. They're all small: 20 bytes for the RTS, 14 for the clear-to-send — about the same size as the ACK — small compared to most data frames. Both the RTS and CTS have an extra field to indicate how long the transmission is going to take, called the duration field. And that describes the steps. In the example you have in front of you in the handout, we see those steps in practice. As normal: DIFS, back-off, then send a request-to-send. Between frames — in the same way as between a data frame and an ACK — we have a short interframe space. Between the RTS and the response there's a short interframe space; between the CTS and responding with the data there's a short interframe space. So between frames within the one data transfer, this short interframe space is used.
Really the short interframe space gives the device enough time to detect the received frame and switch its radio from receive mode into transmit mode. Our radios are usually half-duplex: they're either receiving or transmitting, not both. So B receives the request-to-send, checks it's okay, and switches from receive mode to transmit mode. The short interframe space gives the receiver enough time to switch to transmit mode and then transmit the CTS. It would be more efficient if there were no short interframe space, but to give the hardware a chance to check the frame and swap between transmit and receive mode, it's defined as a short period of time. Now, the RTS and CTS carry a field containing the duration of the upcoming data transmission. It can be calculated because A can predict it in advance: it knows how long the CTS response will take, it knows how many bytes of data it has, so it can calculate how long the data transmission will be, it knows how long the ACK will take to come back, and it knows the short interframe spaces. So when A sends the RTS, it can set the duration from the end of the RTS up until the expected finish time. In this case: SIFS 10, CTS 20 (14 bytes takes 20 microseconds in this example), SIFS 10, data 168 — which we calculated before, and you did in your quiz — SIFS 10, and the ACK 20. Add them up and you get 238. That's the value set in the RTS duration field.
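The duration arithmetic above can be sketched in a few lines of Python. The values are the ones from the worked example (microseconds); nothing here beyond those numbers is in the standard itself.

```python
# RTS duration field: time from the end of the RTS to the end of the ACK.
# All values (microseconds) come from the worked example in the lecture.
SIFS = 10      # short interframe space
T_CTS = 20     # clear-to-send transmission time
T_DATA = 168   # data frame transmission time (calculated earlier)
T_ACK = 20     # acknowledgement transmission time

rts_duration = SIFS + T_CTS + SIFS + T_DATA + SIFS + T_ACK
print(rts_duration)   # 238

# The CTS duration covers only what remains after the SIFS and the CTS itself.
cts_duration = rts_duration - (SIFS + T_CTS)
print(cts_duration)   # 208
```

The same two numbers, 238 and 208, appear in the handout diagram: the RTS "reserves" the medium to the end of the ACK, and the CTS reserves the remainder.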
So when B receives the RTS, it knows someone is about to transmit and how long the transfer will take, and when it responds with a clear-to-send, it sets the duration to the remainder, which is 30 less than the RTS duration: the RTS duration runs from the end of the RTS to the end, the CTS duration runs from the end of the CTS to the end, so it's 238 minus the SIFS and the CTS — minus 30 — giving 208: SIFS, data, SIFS, ACK. In this case our hidden terminal C doesn't receive the request-to-send, because it's too far away, but because it's within range of B it does receive the clear-to-send, reads the duration field, and realises someone else is about to send for 208 microseconds, so it defers for that time. What does NAV mean? It's the technical name for the period the station defers, so you'll sometimes see it called the network allocation vector. The name is not so useful for describing what happens, but you may see it in some textbooks or specifications. It's described in the lecture notes on one of the slides, but the main point is that from the CTS, C knows to defer for some period of time. It doesn't check the medium, it doesn't check if someone else is sending — it automatically assumes someone else is sending — and when that period is up, it can start again: DIFS, back-off and so on. Any questions on how RTS-CTS works? The question — I think a similar one came up yesterday — is: what if C transmits an RTS at the same time as A? That's correct: then we can have a collision. In this specific example, C wants to send at time 180. I made that number up. Where does 180 come from? I made it up — but in real life, what triggers C wanting to send data? An application wants to send data. C is a laptop, for example; someone does something in their web browser and that data flows down to the MAC layer, the wireless LAN card, and that triggers the MAC to say: okay, now I have data to send.
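The NAV rule can be sketched as a small hypothetical function — this is an illustration of the idea, not the standard's exact state machine. The function name and signature are my own; the times are from the example (the CTS ends at 213, i.e. 183 + SIFS + CTS).

```python
# Hypothetical sketch of the NAV rule: on overhearing a frame addressed to
# someone else, a station records the duration field and defers until then.
def update_nav(current_nav_expiry, now, frame_dest, my_addr, duration):
    """Return the new NAV expiry time after overhearing a frame."""
    if frame_dest == my_addr:
        return current_nav_expiry              # frame is for us: NAV unchanged
    return max(current_nav_expiry, now + duration)

# C overhears the CTS (addressed to A) ending at time 213 with duration 208,
# so it will not contend for the medium until 421 — the end of the ACK.
expiry = update_nav(0, 213, "A", "C", 208)
print(expiry)   # 421
```

The `max` reflects that a station never shortens its NAV based on a new frame; it only extends it.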
In this case, I chose that the user did something, and at time 180 after station A started, C had data to send. That's why I say I made up these numbers — 0, 180 and so on. Think of them as the times at which the user has data to send, and that is effectively random. We cannot predict it in advance in most cases; it depends upon what applications are being used and what the user is doing. In this example, if it is 180, station C doesn't get to start the RTS. What if it was slightly earlier? What if at time 112 C wanted to send data — the user at computer C did something a little bit earlier, and as a result C has data to send at time 112? It starts the DIFS, which takes 28 and brings us to 140. Can it sense anyone sending at time 140? No, because at 140 no one is sending, so it starts the back-off. The back-off was three slots, so it would finish at 167. Does it sense anyone sending? A starts sending at 163, but C will not sense that, because A is too far away. The result is that C will transmit its RTS starting at 167 — as far as it can tell, no one else is sending — so C is transmitting its RTS from 167 for the next 20 microseconds, until 187. But at the same time A is sending its RTS between 163 and 183. From B's perspective, there's a collision — so there can be a collision on the RTS. There's a collision whenever the two frames overlap in time. If A is transmitting from 163 to 183, and A and C are both transmitting a frame to B, and they overlap at any time — they don't have to fully overlap, partial overlap is enough — then we consider it a collision. If C had started two microseconds earlier, at 110, we'd still have a collision; for a range of different values we still get a collision.
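The timing just worked through can be checked with a short sketch. DIFS = 28 and slot = 9 microseconds are the values used in the example; the helper names are mine.

```python
# Timing sketch for the hidden-terminal RTS collision (microseconds).
DIFS, SLOT = 28, 9

def rts_interval(data_arrival, backoff_slots, t_rts=20):
    """Start/end of the RTS, assuming the medium appears idle throughout."""
    start = data_arrival + DIFS + backoff_slots * SLOT
    return start, start + t_rts

def overlaps(a, b):
    """Two transmissions collide if their intervals overlap at all."""
    return a[0] < b[1] and b[0] < a[1]

a_rts = (163, 183)              # A's RTS, from the example
c_rts = rts_interval(112, 3)    # C's data arrives at 112, back-off 3 slots
print(c_rts)                    # (167, 187)
print(overlaps(a_rts, c_rts))   # True -> collision at B
```

With C's data arriving at 180 instead, its DIFS would start inside A's reservation, so no overlap occurs — which matches the original example.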
And if there's a collision, similar to a collision in basic access, there will be no response; the stations will time out, increase their contention windows and try again — which means DIFS, back-off with a new value, RTS, CTS, until we get it right. The duration to send one data frame depends upon the amount of payload — in our case 1,100 bytes — the header, and the data rate. In our case it was 168 microseconds; you can calculate those values. If you wanted to calculate the throughput, we can look at it from two perspectives: the throughput of a single station — for example the throughput of station A, B or C — or, sometimes more usefully, the throughput of the network, considering all the stations. In this case we've got three stations. Between zero and the end time, 734, how much payload is delivered? Two times 1,100, so 2,200 bytes. Throughput is simply the payload delivered divided by the total time, 734. In the previous case, when we used basic access and had a collision, still two payloads were successfully delivered, but divided by 908 — the throughput is lower because the total time is larger. So by using RTS-CTS, we can avoid collisions and potentially increase the throughput: with RTS-CTS it's 2,200 divided by 734; without, it's 2,200 divided by 908, which is a smaller throughput. Any other questions on RTS-CTS? Then you can do the exam next week. Let's return to this point: we can still have collisions on the RTS. In basic access, we saw a collision between the data frames. With RTS-CTS, we can have collisions between RTS frames. So we can still have collisions in RTS-CTS. What's the chance — which one is more likely, a collision in RTS-CTS or a collision in basic access, in the same hidden terminal scenario?
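The two throughput figures compared above work out as follows (payload in bits, time in microseconds, so the ratio comes out directly in megabits per second):

```python
# Network throughput = payload successfully delivered / total elapsed time.
payload_bits = 2 * 1100 * 8       # two 1,100-byte payloads delivered

thr_rts = payload_bits / 734      # RTS-CTS case: everything done by 734 us
thr_basic = payload_bits / 908    # basic access with a collision: 908 us

print(round(thr_rts, 1))          # ~24.0 Mb/s (bits per microsecond == Mb/s)
print(round(thr_basic, 1))        # ~19.4 Mb/s
```

Same payload delivered, but the collision and retransmission in the basic-access case stretch the total time, so its throughput is lower.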
If we're using RTS-CTS, we can have a collision amongst the RTS frames. If we're using basic access, we can have a collision amongst the data frames. Which is more likely to occur, and why? The data frames — why the data frames? Because they're longer, they're larger. Let's try to analyse that in a very informal way. With basic access, here's our data frame — actually, I'll make it a bit longer to emphasise the point: data frames are normally much larger than the ACK, RTS and CTS frames. So we have a long data frame, and I'll just draw A and C. Assuming A transmits its data frame at this point in time, at what times could C transmit its data frame such that it causes a collision? A collision is when the two data frames overlap in time. So if C started transmitting here, and there was a small overlap here, that would be considered a collision, because they are overlapping in time. If C started transmitting here, say, there'd be a large overlap in time — all this period would be an overlap, a collision. If C started transmitting at exactly the same time, obviously there's a collision: the entire frame overlaps with A's transmission. If C starts transmitting just before A finishes, there's a collision. So whenever C starts transmitting such that its data frame will overlap with A's, we get a collision. What's that total time, from here through to here — the window in which C starting to transmit causes a collision? What does that time depend upon? The data transmission time. In our example it was 168. So let's put some numbers on it: this time is double 168, which is 336. A starts transmitting here and lasts for 168; C's data transmission also lasts for 168.
Of course, if they start at the same time: collision, full overlap. If C starts at a time such that the end of its transmission just overlaps with the start of A's, that's a collision — and that's 168, or slightly less than 168, microseconds before A starts. Similarly, if C starts just before A's transmission completes, we get a collision. In summary, if C starts transmitting any time within this 336 microseconds, we get a collision — in general, double the transmission time of the data frame, assuming they're the same size. The same applies if we're using RTS-CTS. Let's draw it: A transmits an RTS at some time — I'll make it a bit bigger, just "R". If C starts such that the transmissions just overlap here, a collision; or if C starts anywhere between there and just before A finishes its transmission, a collision — where this window is 2 times the RTS time, and in our example an RTS is 20. So if C starts its transmission within that period of 40 microseconds, we get a collision, because the two RTSs will overlap — where 40 is 2 times the duration of an RTS. Same concept as before: with the data frames in basic access, if C starts transmitting any time in a period of 336 microseconds, we get a collision. The next question is: which one is more likely? Assume they want to start transmitting at random times — remember the start of transmission depends upon the user, and we cannot predict that very easily. So assuming A transmits at some time and C starts at some random time, the chance of C starting in a period of 40 microseconds is smaller than the chance of C starting in a period of 336 microseconds. Therefore the chance of collision in the RTS-CTS case is smaller than in the basic access case.
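The informal argument above boils down to comparing "vulnerable periods" — the window of start times for C that produces an overlap, which is roughly twice the frame transmission time:

```python
# Vulnerable period: if C starts anywhere in this window, its frame
# overlaps A's (assuming both frames are the same length).
def vulnerable_period(t_frame):
    return 2 * t_frame

w_basic = vulnerable_period(168)   # data frames, basic access
w_rts = vulnerable_period(20)      # RTS frames

print(w_basic, w_rts)              # 336 40

# If C's start time is uniformly random, the chance of a hidden-terminal
# collision scales with the window, so here RTS-CTS is 336/40 = 8.4 times
# less likely to suffer one.
print(w_basic / w_rts)             # 8.4
```

The exact probability depends on the traffic model, which we're not calculating, but the ratio of the windows is the intuition.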
That's difficult for some people to get their heads around, so any questions on the concept? With our example, the point is that data frames are usually much longer than RTS and CTS frames. A collision occurs when frames overlap in time, even partially, and the question is: what's the chance that two stations transmit in some period such that they overlap in time? That probability depends upon how long the frames are. The longer the frames — the data frames here — the more chance that two stations transmit at overlapping times. The shorter the frames — RTS and CTS — the less chance they'll transmit such that they overlap, and hence less chance of collision between RTSs than between data frames. You can actually calculate that probability, but we're not going to. Any questions on that? [Question from the audience.] What if C starts sending an RTS as well? If the two RTSs collide at B, that's a problem for B: it will not respond with a CTS to either of them. If A's RTS is fully received before C's, then B can send the CTS to A and not send a CTS to C. Yes — and because it's a broadcast medium, everyone receives the CTS and takes notice of it. The CTS is sent from B to A, but C also receives it, processes it, and understands what it means: that someone else is about to send. So if C sends an RTS to B after B has already received one from A, B will not respond with a CTS to C, so C will have to wait. And if the two RTSs do collide, yes, B will not respond to either of them. Any other questions about the chance of collision? You don't have to calculate the probability, but you should understand why collisions are more likely with longer frames than with shorter frames. That's the big difference: an RTS is generally much shorter than a data frame, so there's much less chance of collision.
But still a chance. How do I get the duration of 238? Station A has data to send. It does its DIFS, its back-off, and then sends the RTS, and it sets a field inside the RTS, the duration field, which it calculates by predicting how long it will take to successfully finish this transfer. It's easy to predict, because from the end of the RTS, the duration is one SIFS, plus one clear-to-send, plus another SIFS, plus the data transmission time — which A can calculate, since it knows how much data it has to send and how fast it's going to send it — plus a SIFS, plus the ACK time. So it calculates from 183 up to 421, which is 238, and sets that value in the RTS. That's the duration until the end of a successful data transfer. The 208 is the same, except it runs from the end of the CTS to the end, and B can calculate that. If you want to find the throughput of station A: finding the throughput of anything when there's just one data frame — or just a few — is not normally statistically accurate. From A's perspective, if I asked you to find the throughput, I think you would measure from here to the end of the ACK, but in fact that's the throughput of one frame. In reality you send many frames, and you need to calculate across all those frames; that will be more accurate. The throughput of the network is from the start of the DIFS to when the last ACK completes — that's the time. But in practice, throughput is normally calculated across multiple frames, because it varies. All right: longer frames, more chance of collisions.
Hence we have this short RTS frame, and after that, a collision should be much less likely, because the CTS informs the others that we're about to transmit, and they defer for what remains. What if there are no hidden stations — which is better, basic access or RTS-CTS? Basic access. If there are no hidden stations, in most cases there will be very few collisions. But we can still have collisions — how? One way is hidden terminals; that's one cause. Another, which we saw earlier, is two stations choosing the same back-off time. So there's still a chance of collisions even without hidden terminals, and similarly, using RTS-CTS can reduce that chance. So in general there's a trade-off: RTS-CTS is better for reducing collisions, but has more overhead, because we have to send an RTS and a CTS. Which one do we use? In practice, we use both — we exploit the trade-off between the two. Your wireless LAN device can switch between them, and the way it switches depends upon how large the data frame is. For every data frame your computer has to send, it makes a decision: do I use basic access or RTS-CTS? So you normally don't use just one or the other; it depends upon the data frame size. There's what's called an RTS threshold — a parameter of your wireless LAN device. If the data frame is smaller than that value, basic access is used; if it's larger than or equal to that threshold, RTS-CTS is used. The idea is that when you have a large data frame, the RTS-CTS overhead is relatively small compared to the frame, so the overhead is not significant and it's better to protect against collisions. But when you have a small data frame, the overhead is proportionally large, so it's better to just use basic access.
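The threshold rule is simple enough to state as a one-line function. This is a sketch of the decision as described above; the function name and example threshold of 1,000 bytes are mine, not values from any standard.

```python
# Hypothetical sketch of the RTS-threshold rule: frames at or above the
# threshold use RTS-CTS, smaller frames use basic access.
def access_method(frame_size, rts_threshold):
    return "rts-cts" if frame_size >= rts_threshold else "basic"

print(access_method(1500, 1000))   # rts-cts: large frame, overhead worthwhile
print(access_method(200, 1000))    # basic: overhead too costly for a small frame
```

Setting the threshold above the maximum frame size effectively disables RTS-CTS; setting it to zero forces RTS-CTS for every frame.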
One way to illustrate that: RTS, CTS, small data frame. Say we have a small amount of data to send, then the ACK. Within the total time — there's also a DIFS and some back-off back here — the data transmission is just a small percentage of the total in this case. But if we have a large data frame — an RTS, a CTS, then a large data frame, then an ACK — the data transmission, especially the payload, is a larger fraction of the total time. So with small data frames the RTS-CTS overhead is large, and therefore inefficient; with larger data frames the overhead is relatively small, and it's acceptable to cope with it. So: large data frames, use RTS-CTS; small data frames, stick with basic access. Often on your laptop or your mobile phone you can change that parameter, the RTS threshold. There are other factors that determine the best value for the threshold. It's hard to predict — it depends on things like how often stations are sending and how many stations are in the network. We've considered cases of just three or four stations: A, B, C and D. With 20 stations it gets more complex and there are more chances of collisions; but RTS-CTS has extra overheads, and there are even other problems, which we won't attempt to describe, that can arise. So it's a complex relationship between using basic access and using RTS-CTS; in practice, the threshold defines which one is used for each frame. I think we're about finished. We'll go through a few of these slides, but I think you've seen most of this — we've covered most of the remaining slides, at least the things I wanted to cover, earlier and in the quiz. Any questions before we quickly cover the remaining slides? There may be one or two new things, but most of it we've seen.
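That "fraction of the total time" picture can be put in rough numbers. The overhead times below are assumptions for illustration — DIFS 28, an average back-off of about 68, RTS 20, CTS 20, ACK 20, and three SIFS of 10, in the spirit of the earlier examples.

```python
# Rough illustration: what fraction of an RTS-CTS exchange is actual payload?
# Assumed overheads (us): DIFS 28, avg back-off 68, RTS 20, SIFS 10,
# CTS 20, SIFS 10, <data>, SIFS 10, ACK 20 -> 186 us of non-payload time.
def payload_fraction(t_data):
    overhead = 28 + 68 + 20 + 10 + 20 + 10 + 10 + 20
    return t_data / (t_data + overhead)

print(round(payload_fraction(20), 2))     # small frame: payload is a thin slice
print(round(payload_fraction(1000), 2))   # large frame: payload dominates
```

The bigger the data frame, the closer the fraction gets to 1 — which is exactly why the threshold rule only pays the RTS-CTS cost for large frames.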
This is just a summary of some of the performance issues of the MAC layer protocol we've just looked at, DCF. We've calculated throughput in some cases. The physical layer offers some raw data rate — the speed at which we can send our bits — so we can calculate a transmission time, but unfortunately not all time is spent transmitting data frames. Some time is spent sending headers and control frames, some on interframe spaces where nothing is sent, some waiting for back-offs, and some retransmitting. They all reduce the efficiency, or throughput, of our MAC protocol. The throughput of the MAC layer is the rate at which the user data — the payload — is successfully delivered at the destination. Not the rate at which it's sent: if we have a collision, that doesn't count as successful delivery. We've done some throughput calculations — yesterday, and also in the quiz if you've done it, you'll see you need to calculate the throughput in some cases. We've seen these parameters, and the parameters differ for the different physical layers. I think you did this calculation, or close to it, in the quiz: an approximation of the best case we could get if there were no collisions and no deferral. The total time is DIFS plus back-off plus data plus SIFS plus ACK. Here are some example values. For the average back-off, if we choose a number of slots between zero and 15, on average we choose seven and a half. So the total time is DIFS, plus seven and a half times nine (the slot time), plus the data time — which depends upon the data size; in this example it's 1,500 bytes plus header, divided by the data rate of 54 megabits per second — plus SIFS, plus the ACK, which here is 14 bytes. That comes to about 334 microseconds, giving a throughput of just less than 36 megabits per second.
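The best-case figure can be reproduced with the example's numbers. The 28-byte MAC header is an assumption that makes the arithmetic match the quoted 334 microseconds; everything else (DIFS 28, SIFS 10, slot 9, 54 Mb/s, 14-byte ACK at the full rate) comes from the example itself.

```python
# Best-case DCF throughput sketch (802.11g-style numbers, no collisions,
# no deferral; ACK assumed sent at the full 54 Mb/s as in the lecture).
DIFS, SIFS, SLOT = 28, 10, 9          # microseconds
RATE = 54                             # Mb/s == bits per microsecond
PAYLOAD, HEADER, ACK = 1500, 28, 14   # bytes (header size is an assumption)

avg_backoff = 7.5 * SLOT                      # mean of 0..15 slots
t_data = (PAYLOAD + HEADER) * 8 / RATE
t_ack = ACK * 8 / RATE
total = DIFS + avg_backoff + t_data + SIFS + t_ack
throughput = PAYLOAD * 8 / total

print(round(total))          # ~334 us
print(round(throughput, 1))  # ~35.9 Mb/s -- "just less than 36"
```

Note this counts only payload bits in the numerator: headers, the ACK and all the waiting time are pure overhead.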
So in this case, with 802.11g, the data rate is 54 megabits per second and the throughput about 36 megabits per second. Note that here I assumed the ACK was sent at the full data rate, 54. In some of the other calculations, we said it was sent at six. In an exam question, I would tell you the control rate — the rate at which ACKs are sent. You can't get better than this; in most cases it will be worse, because we'll also have some deferral and some collisions. This is the best case we can achieve. If we use RTS-CTS, there's an additional interframe space plus the RTS and CTS times, so the total time increases and the throughput goes down — and we'd still need to consider collisions and deferral. Yes — in this example calculation I set the ACK rate equal to the data rate; it's the best case. I think on some devices you can configure all transmissions to use the maximum rate, but that's probably incompatible with older devices. So this 54 — the rate at which the ACK is sent — I've set as the fastest possible for this calculation, but the standard normally requires a lower rate. It may vary, depending on the physical layer. Use the rate that someone gives you; if nothing is said, use the data rate. So if I say the data rate is 54 and nothing about the control or ACK rate, then use 54. Nowadays it depends more on the implementation — some optimisations, I think, can use a higher rate here — but in the original standard it was much lower than the maximum data rate. Of course, the data size may vary as well. Here it's 1,500 bytes; a different size would change the performance. The maximum payload in a wireless LAN frame is 2,312 bytes, but because our wireless LAN normally connects to an Ethernet network — a wired LAN, where the maximum is normally 1,500 — in practice the maximum in a wireless LAN is usually 1,500.
It's limited by the other network we connect to. So if you capture packets in a wireless LAN, I don't think you'll see frames larger than 1,500 in most cases. So: throughput is the payload divided by the time it takes to deliver that payload. The other issues we're not going to cover — security, for instance. There are more practical issues too: how many access points do you need to cover some area? What should the RTS threshold be? How do you cover a large area with multiple access points, for example using different frequencies? How do you provide security? How do you give priority to applications? They're all important issues, but we will not try to cover them in this course. There are many other features of wireless LANs that try to address them. The last thing — in fact, the last one or two things — we'll mention, somewhat related here, concerns our throughput of approximately 36 megabits per second; let's call that the network throughput. Suppose you have an access point and one client sending data to that access point. According to this calculation, the best throughput we can get is about 36 megabits per second. What if I have an access point and two clients sending to the access point — what's the best throughput each client could get? Two laptops associated to an access point, both wanting to send data via it: what's the best throughput your laptop could get, approximately? Let's draw it: an access point and one client gives approximately 36 megabits per second. Now one access point, two clients, no hidden terminals — keep it simple, assume no collisions. Client one and client two must share the 36, so about 18 megabits per second each.
They must share that 36 amongst them, and that's shown in most of our diagrams. Come back to a simple one: say B is our access point, and A and C want to send. The way the MAC protocol works — if it works well and there are no collisions — only one station is sending at a time. We see that here: A sends its data, then C sends its data, then maybe A sends more — A, then C; they may alternate, or maybe A gets to send twice before C sends once. But the idea is that only one station is sending at a time. So this 36 megabits per second we can think of as the network throughput, and if there are two stations, it must be divided amongst them. You could calculate it here — I think we did calculate something similar: from zero through to 614, the network throughput is two times 1,100, which is 2,200 bytes, divided by 614. Half of the time A is sending its data, half of the time C is sending its data, so effectively client one gets 18 megabits per second and client two gets 18; the network throughput is 36. The way the MAC works, only one station sends at a time, so we are effectively dividing the network throughput amongst the stations. With C1, C2 and C3, what do we get? The best the network can get is 36, and each station transmits only one third of the time. In this example, considering the data transfer time, A is sending half of the time and C is transferring its data the other half. If we had three stations, each would be transferring a third of the time. So with three stations: 12, 12 and 12. We effectively divide the network throughput by the number of stations, and that gives the per-station, or per-client, throughput. Maybe there's another way to draw that — I see many confused faces.
Without drawing the details like we have on the screen, consider many data frames between A and C; I will not draw B, the receiver. A has its data transfer period, so this is the total time from 0 up to 361, and then C gets to send its data. If they have more data to send, maybe it's A again and then C, and maybe A gets two turns to transmit; if the MAC works well, only one of the two is transmitting at a time. So over some large time period, A is transmitting half of the time and C is transmitting half of the time; therefore, they each get 50% of the throughput. In general, with N stations, we divide the network throughput by N to get the per-station throughput. In practice, if you have an access point with 10 laptops associated and transferring data, then that throughput of, say, 36 megabits per second is divided by those 10 stations: 3.6 each. The more users you add, the less each individual user gets. It's in fact even worse, because this considers clients sending to the access point, but the access point also needs to send to the clients, so it's even less per station. In fact, it would be divided by four: the access point gets a quarter, and C1, C2 and C3 each get a quarter, if we consider all four of them sending. So in general, the stations sharing the medium must share that throughput amongst them; the more stations, the less per-station throughput. In the MAC protocol, an access point and a client are treated the same: an access point is just another station. In my example, I said B could be the access point, and it follows the same rules. In practice, normally the clients send to the access point and the access point sends to the clients, so we'd consider this as, in fact, four stations wanting to send. It's even worse than that, because often the access point sends much more than the clients send, so it gets more complex than just dividing by four.
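The divide-by-N rule can be captured directly (a sketch assuming, as in the lecture, no collisions and equal offered load per station):

```python
def per_station_mbps(network_mbps: float, n_stations: int) -> float:
    """Each of the N stations transmits 1/N of the time, so the
    network throughput is shared equally amongst them."""
    return network_mbps / n_stations

print(per_station_mbps(36, 2))   # 18.0
print(per_station_mbps(36, 3))   # 12.0
print(per_station_mbps(36, 10))  # 3.6
```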
If there were three clients sending and the access point for some reason didn't want to send any data, then each would get this 12. If there were three clients sending and the access point wanted to send the same amount as each of those clients, then it would be 36 divided by four, being nine. In practice, the access point often sends much more than an individual client, so we cannot always calculate it that easily. That's the issue: say the clients want to upload a file, so all clients are trying to send to an access point and out to the internet, and the access point has nothing to send in response. Then we'd get approximately 36 divided by the three clients wanting to send: 12 each. If now C1, C2 and C3 are sending to the access point, and the access point is also sending some data to them, about the same amount as each client, then it will be 36 divided by four. We effectively have four stations wanting to send, so it will be nine each: nine for the access point, and nine for each of C1, C2 and C3. The access point gets the same as the individual clients, and that's where the half-duplex situation arises. In reality it's more complex, in that normally the access point is sending much more than the clients, and it's hard to calculate what the answer will be in that case; this is just an approximation. The best rule to remember: divide the network throughput by the number of stations that want to send data. That brings us to the end, I think. Any questions in the last 10 minutes? Let me say some things about the exam before any further questions. As I said earlier, I haven't written the exam yet, so I don't have any hints other than saying it will be similar to previous years' exams, with similar types of questions. By Monday at the latest, after I've written the exam, I'll send an email to the list with some hints about what's in it.
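Counting the access point as just another sender, the two cases above work out as follows (a sketch; equal offered load from the access point is the lecture's simplifying assumption):

```python
network_mbps = 36.0
n_clients = 3

# Upload only: the access point has nothing to send back,
# so only the three clients share the medium.
upload_only = network_mbps / n_clients     # 12.0 each

# Access point also sends about as much as each client:
# it counts as a fourth station contending for the medium.
with_ap = network_mbps / (n_clients + 1)   # 9.0 each, AP included
print(upload_only, with_ap)
```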
It will cover the three topics that we've covered, each of them in some detail. Yes, and that's the next thing: the assignment will be covered in the exam. Everyone has set up their access point and knows how it works, so there may be some very basic questions about the access point. Very, very basic; nothing I think you would not have seen. For example: what is the maximum data rate supported by your access point? What standard is supported by your access point? But nothing much harder than that about the assignment. Now that we've finished covering how wireless LAN performance works, the assignment, straight after the midterm, will have you measure the performance and report upon it. So that will come straight after the midterm, but there may be one or two very short questions about the access point or about some practical aspects of wireless LANs.