So, we have seen how the MAC layer works and how the routing layer works; now we will see how the TCP layer works, and what the impact of wireless and mobility is on TCP traffic. Just to quickly recap what TCP is, this is basic undergraduate stuff. TCP does reliable, in-order delivery; it has a notion of acknowledgments and retransmissions; it maintains end-to-end semantics, so an acknowledgment is sent only after the data has actually reached the receiver; and it implements congestion avoidance and congestion control. The basic mechanism in TCP is that you have a TCP sender and a TCP receiver. It does not matter whether these are connected through wired or wireless links, or whether it is a single-hop or a multi-hop path; this is the basic behavior of TCP. Assuming the three-way handshake has already happened, the TCP sender first sends a packet, and at that point it sets up what is called the retransmission timeout (RTO). Setting the RTO basically means: if I do not get the ACK for this data before the timer expires, I am going to assume the packet got lost, and I am going to retransmit the same packet. That is your retransmission timeout. The other thing to remember is that the TCP sender need not send packets one by one; it can send multiple packets in the same round. I can send four packets at one time and get back acknowledgments for all four of them; that is also possible.
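The retransmission-timer behavior just described can be sketched in a few lines of Python. This is only an illustrative sketch, not real TCP code; the class and method names are invented for the example, and time is passed in explicitly so the logic is easy to follow.

```python
# Sketch of the retransmission-timeout logic: the sender records a
# deadline per unacknowledged packet and resends anything whose
# timer has expired. An ACK cancels the corresponding timer.
class StopAndWaitSender:
    def __init__(self, rto):
        self.rto = rto
        self.unacked = {}                    # seq -> deadline

    def send(self, seq, now):
        self.unacked[seq] = now + self.rto   # arm the retransmission timer

    def on_ack(self, seq):
        self.unacked.pop(seq, None)          # ACK cancels the timer

    def check_timers(self, now):
        """Return sequence numbers that must be retransmitted now."""
        expired = [s for s, dl in self.unacked.items() if now >= dl]
        for s in expired:
            self.unacked[s] = now + self.rto # re-arm after retransmitting
        return expired
```

Note that nothing here tells the sender what a good `rto` value is; that is exactly the first of the two open questions discussed next.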
So now the questions are: how many packets should I send at any given point of time, and how do I begin the whole process? These are the two key things TCP does not know: first, what value to use for the RTO, and second, how many packets to send in each round. If I get the RTO value wrong, one of two bad things happens. If the RTO is too large, I wait too long before retransmitting a packet that has actually been lost. If it is too small, I panic and do duplicate transmissions unnecessarily, because the packet may have got through and the acknowledgment may just be on its way. So it is very important to get the RTO value right; it controls retransmission. The second question, how many packets to send in each round, controls throughput. If I send one packet, wait for the acknowledgment, send another packet, wait for the acknowledgment, then my throughput is going to be low; the throughput of the connection depends on how many packets I am able to send in each round. Now what is the problem here? The problem is that TCP does not know what network it is operating on. It does not know whether the receiver is a machine on the same LAN or halfway across the world, and it would have to take different actions in those two cases, so it has to figure both of these things out.
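For the first unknown, the RTO value, the standard approach (standardized in RFC 6298, not derived in this lecture) is to estimate it from measured round-trip times using exponentially weighted averages. A minimal sketch, using the RFC's recommended smoothing constants:

```python
# Sketch of the standard RTO estimator (RFC 6298, Jacobson's algorithm).
# alpha and beta are the RFC's recommended smoothing constants; the
# class name is invented for this example.
class RTOEstimator:
    def __init__(self, alpha=1/8, beta=1/4, min_rto=1.0):
        self.srtt = None          # smoothed round-trip time
        self.rttvar = None        # RTT variation estimate
        self.alpha, self.beta, self.min_rto = alpha, beta, min_rto

    def update(self, rtt_sample):
        if self.srtt is None:     # first measurement seeds the estimator
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + \
                          self.beta * abs(self.srtt - rtt_sample)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt_sample
        return self.rto()

    def rto(self):
        # RTO = SRTT + 4 * RTTVAR, floored at a minimum value.
        return max(self.min_rto, self.srtt + 4 * self.rttvar)
```

The key idea is that the RTO adapts: a connection on the same LAN converges to a small timeout, while one halfway across the world converges to a large timeout, without TCP ever being told which case it is in.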
So how does it do that? The easier one is how many packets to send in each round. The TCP sender sends one packet and waits for the ACK. If the ACK comes within the RTO, then in the next round it sends two packets; if both those ACKs come back within the RTO, then in the next round it sends four packets; that is the way it grows the window exponentially. This phase is called slow start. Slow start is unfortunately a misnomer: it is not starting slow, it is growing exponentially; after four it is going to send eight, sixteen and so on. TCP also does a bunch of optimizations here: when four packets are sent, instead of sending back four individual acknowledgments, the receiver can send what is called a cumulative acknowledgment, a single acknowledgment covering all four packets it has received. Now, you do not want to go on doubling all the time; if you do, you are eventually going to congest the network. So after a certain point TCP enters what is called congestion avoidance, or linear increase. There is a parameter called the slow start threshold, defined as a default startup parameter: you keep doubling the number of packets up to the slow start threshold, and after that point you increase the number of packets linearly. If you plot the window against time, that is the basic TCP behavior: exponential growth up to the threshold, then linear growth.

When does this curve stop increasing? When the ACKs stop coming, the window has to come down. If an ACK does not come, there is a whole bunch of actions TCP can take, and there are lots of flavors of TCP: TCP Tahoe, TCP Reno, TCP SACK, TCP New Reno, TCP Vegas. Each of these takes different actions when an acknowledgment is lost. In some cases the TCP sender drops the window all the way down to its initial value and starts all over again; in other cases it drops only to half the current value; different flavors of TCP take different actions. And if everything keeps going fine, does the window go on increasing forever? No: at some point you reach the capacity of the receiver's buffer, and you stop at that value. How does the sender know the receiver's buffer capacity? The window advertisement comes in the acknowledgment: when the receiver sends an acknowledgment back, it puts a value into it saying, I have so much space left in my buffer. Based on how much space the receiver has left and the current congestion state of the network, the TCP sender decides how much to send. This is what is termed window-based flow control, or the sliding window protocol: the window size is the minimum of the receiver's advertised window, the value carried in the acknowledgment, and the congestion window, the value we have seen growing in the graph. That is the sender's window. The sender keeps track of which acknowledgments it has received and which packets are yet to be transmitted, so it knows how many packets are in flight.

Now, the key issue is how TCP detects packet loss. Remember that TCP was invented for wired networks, and in a wired network packet loss happens more frequently due to congestion and less frequently due to errors, because link error rates are very low while congestion at routers is common. So what does TCP do upon detecting packet loss? It does a retransmission, and there are two mechanisms. The first is timeout: the retransmission timer has expired but the acknowledgment has not come back. In that case, one flavor of TCP drops the sending window all the way down to its initial value and restarts, and the new slow start threshold is set to half the current window. So suppose I started with a slow start threshold of 8: I doubled quickly up to 8, then increased linearly, and when I was sending 20 packets at a time I missed one of the acknowledgments. TCP is in that sense a very conservative protocol: I take the conservative action of dropping back down to the initial value, but instead of climbing exponentially up to 8, I now climb exponentially only up to half the window I had, that is, 10. That is how TCP behaves after a timeout.

Then there is the other mechanism, called fast retransmit. Fast retransmit happens when the receiver gets packets out of sequence. Suppose you are sending me packets 1, 2, 3, 4; I got 1, I got 2, then I get 4. When the receiver gets packets out of sequence, it sends duplicate acknowledgments immediately, to signal to the sender that some intermediate packet has not yet arrived. That again suggests congestion, but it is not as bad as the acknowledgment not coming back at all. So in this case, instead of dropping the sending window all the way down, TCP drops it only to half the value and directly enters the linear increase phase. That is basic TCP behavior. What is the throughput here? Throughput is proportional to the area under this window-versus-time curve, and what you want is to maximize it.

Now, what happens to TCP over wireless? Because the wireless channel has bursty and random errors, TCP gets confused: every time there is a packet loss, TCP assumes the loss happened because of congestion in the network. That is the fundamental problem from the viewpoint of TCP. What is a burst error? Many bits in error contiguously. A burst error can affect more than one packet at a time, so it can cause a timeout. A random error can cause a single packet to be lost or to fail its checksum, so that one packet's acknowledgment will not come; random errors therefore tend to cause fast retransmit, while burst errors cause timeouts. The key problem is that TCP cannot distinguish packet losses due to congestion from packet losses due to transmission errors, so it unnecessarily reduces the congestion window and your throughput suffers.

So consider three scenarios. First, I have a fixed host, a wired network and a receiver; in this case TCP works fine, fine within its limits. Second, I have a fixed host, a wired network, a base station acting as a relay, and then a wireless hop to a mobile host; from simple wired we have moved to one wireless hop. The third scenario is multi-hop wireless: host 1 connects over wireless through some intermediate host k to host n. We have to understand how TCP behaves in these three cases. In the first case, standard TCP behavior, whichever flavor is implemented, is used when both nodes are in the wired network. So let us look at case 2, try to come up with some solutions, and see what the effect of those solutions is. In this scenario, on the wireless part of the link there can be burst errors or random errors, and any packet loss there panics the TCP sender: the sender, here the fixed host, drastically drops its rate, which is bad, because the wireless link already has very low throughput. So how do I fix this problem? As far as the fixed-host sender is concerned, it does not know that the path crosses a wireless link; the TCP sender is not notified that there is a wireless hop in this end-to-end connection. If it knew that, it could obviously take some other action, but it has to somehow figure that out; and it should take that action only for a receiver behind a wireless hop, not for a node that is in the wired network.

So let us look at the suggestions one by one. The first is explicit packet drop notification. There is an ECN flag, explicit congestion notification, so a router could use that to tell the TCP sender why a packet was dropped. If the router itself drops the packet due to congestion, that is fine, it knows the reason. But does the TCP receiver know the cause of a packet drop? If the receiver could classify a drop as wired congestion, wired error, wireless congestion or wireless error, then it could send an explicit notification to the sender saying this packet got dropped for such and such reason, and the sender could take the appropriate action. How easy or hard is this? Think of it like this: what information does the receiver have? It just knows that it missed receiving one of the packets. The receiver can easily notify the sender using the acknowledgments; the hard part is knowing why it did not receive a packet, and it cannot make that out. There are lots of schemes that try to study things like inter-packet arrival time: if I am receiving packets continuously, one every few milliseconds, and then I suddenly lose one, I might guess it was because of an error. But all of these are guessing techniques; there is no clear way of determining why a packet was lost and explicitly notifying the sender of the reason.

So what do we do? Can I just divide the connection into two parts, one TCP connection on the wired part and another on the wireless part, and handle the two separately? That was the first suggestion that was tried, and it is called the split connection approach: the end-to-end TCP connection is broken into one connection over the wired part of the route and one connection over the wireless part of the route. Let us try to answer the obvious questions. Where do I need changes in order to implement this? Not at the fixed host; I need to make changes at the base station. Who controls the base station? The operator. So I need the operator's help to implement this; that is one thing. What else happens in such a system? Suppose the fixed host has sent a bunch of packets to the base station, the base station has acknowledged them, and then the base station goes down momentarily before it has been able to forward the packets to the mobile host. As far as the fixed host is concerned, the packets have reached the mobile host. That is the key reason we use TCP at all: we place blind faith in its guaranteed delivery semantics; the TCP sender knows that if it has got an acknowledgment, the packet has reached the receiver. But now the TCP sender gets an acknowledgment from the base station when the packet really has not reached the mobile host, and it removes the packet from its sending window, treating it as already acknowledged.
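As a rough illustration of why the split connection approach breaks end-to-end semantics, here is a toy sketch of a base station that acknowledges the fixed host before the data has reached the mobile host. All names are invented for this example; real I-TCP is of course far more involved.

```python
# Toy sketch of a split-connection base station: it terminates the
# wired TCP connection and opens a second one toward the mobile host,
# ACKing the fixed host as soon as it has buffered a segment. The
# premature ACK is exactly what breaks end-to-end semantics.
class SplitConnectionBS:
    def __init__(self):
        self.per_conn_state = {}   # one buffer per split connection

    def on_wired_segment(self, conn_id, segment):
        # Buffer for later delivery over the wireless hop...
        self.per_conn_state.setdefault(conn_id, []).append(segment)
        # ...but acknowledge the fixed host immediately.
        return "ACK"

    def forward_to_mobile(self, conn_id):
        buffered = self.per_conn_state.get(conn_id, [])
        self.per_conn_state[conn_id] = []
        return buffered   # lost forever if the base station crashes first
```

If the base station crashes between `on_wired_segment` and `forward_to_mobile`, the fixed host has already discarded the segments as acknowledged, so nobody can recover them.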
So that is the key problem with the split connection approach. What happens here is that a packet comes down from the application layer to the TCP layer as one TCP connection terminating at the base station, and then a separate TCP connection carries it from the base station to the mobile host. One problem is that the base station has to maintain per-connection TCP state. What does that mean? Suppose from the mobile host you open, say, three connections to Google: the base station has to maintain state for three different connections. If there are 100 such users doing that, the base station has to manage 300 connections, and it has to do the splitting, forwarding each request in the right direction. As far as the sender is concerned, the base station is the receiver; the sender does not care, it is just going to send the packets off fast. That is the key problem with the split connection approach. One of the earliest such mechanisms was called I-TCP, indirect TCP. There are many variants, I-TCP, M-TCP, S-TCP and so on, but we can classify them by the approach taken. So, to summarize the split connection approach: end-to-end semantics are broken, and per-connection TCP state must be kept at the base station.

What happens if there is mobility in such a scheme? If I am maintaining per-connection state and I move from one base station to another, the entire state has to be moved from the old base station to the new one; otherwise it is as though all those packets got lost. This is what is called hard state: if you do not move that state, bad things happen. Soft state, by contrast, means that if you do not move the state you just lose some efficiency, but nothing breaks. What is the bad thing that happens here? As far as the TCP sender is concerned, it does not matter whether the base station has gone down or the mobile host has moved: it has received the acknowledgment, so it assumes the packets have been delivered. That is the key problem.

So how do we fix this? The fix, in order to maintain end-to-end semantics, is that the base station withholds the ACK to the fixed host until it receives the ACK from the mobile host. Suppose I do that. Some other problem gets created: the moment I withhold the acknowledgment, the fixed host's timing now includes however long the wireless link retransmissions take. If a wireless retransmission takes beyond a certain amount of time, the fixed host can time out and retransmit, and all the RTO calculations can go wrong.

So what do we do? There are two or three schemes; I will quickly tell you about one of them, called the snoop protocol. You snoop on the packets that are going past: the base station buffers the data packets to allow link layer retransmission, and when duplicate acknowledgments are received by the base station from the mobile host, it retransmits on the wireless link itself. This tries to prevent the fast retransmit at the TCP sender. So what we are saying is: I am not going to split the connection at all; I allow a single TCP connection from the fixed host to the mobile host, except that in between I am buffering. While I am buffering packets, the mobile host is continuously sending acknowledgments and I monitor them. If it says I have received packet 1, I have received packet 3, then I know it did not receive packet 2, so I pick packet 2 out of my buffer and send it to the mobile host locally, instead of letting that acknowledgment go all the way back to the fixed host.

Again, this protocol has problems. The buffer at the base station is a very passive buffer: packets are going past and I am just copying them; it has nothing to do with the fixed host, and the fixed host does not even know the base station is copying these packets. As far as the fixed host is concerned, it is communicating only with the mobile host at the other end; the base station in between is just copying some of the packets, and if the mobile host indicates that some packets are not getting through, the base station does a local retransmission. That is the whole scheme. Suppose the mobile node is down: how many times should the base station retransmit? That is one question. But even if the mobile node is up, there is a more basic problem, and the word snoop should give you the hint: I have to look into the packet, to find out which data an acknowledgment corresponds to. Now look at it like this: a TCP segment is handed to IP, the IP packet goes into a MAC frame, it is sent on the wireless link, and at the base station it comes back up to the IP level. Suppose I am doing encryption at the IP layer, with IPsec or a similar mechanism: then no amount of snooping is going to help, because I cannot look into the packet to find out which segment it is or which acknowledgment corresponds to what. So the key reason snoop fails is encrypted packets; the other problems are real too, but the main one is what to do with encrypted traffic.

The other issue is what happens on handoff. We have already seen that in the split connection approach, hard state must be moved to the new base station. In the snoop protocol, it turns out the state is soft. Why? Because we still have an end-to-end connection: the TCP sender and TCP receiver talk end to end, and the base station is just copying some of the packets. While the new base station builds up its state, some packet losses may not be recovered locally, but that is okay; end-to-end TCP still recovers them. Frequent handoffs, though, are always a problem for schemes that rely on a significant amount of state.

So how do I handle handoff? Last question on TCP: how does the TCP receiver prevent the TCP sender from reducing its window during a handoff? The situation is: I have a fixed host, a base station 1 with a mobile host connected through it, and now the mobile host moves and connects to a base station 2. The problem is that moving from one base station to the other takes some time. If it takes too long, the sender's RTO timer expires, and the moment that happens the sender takes a whole bunch of drastic actions. How do I prevent that? You have to fool the TCP sender in some way, and the key is in what the receiver advertises. How do you stop the sender from sending you packets without it dropping its window? You advertise a window size of 0. Even in a wired network, when the receiver is out of buffer space, it says so in the acknowledgment: window 0, I am out of buffer space. What does the sender do at that point? It does not transmit packets, it does not treat this as loss; it just waits. The same thing can be done here: the receiver advertises a window of 0 in the acknowledgment, since the window advertisement travels on the acknowledgment, and the sender goes into what is termed persist mode. In persist mode the sender periodically sends a one-byte probe and looks at the acknowledgment. If the acknowledgment says there is buffer space now, it starts sending again; if the acknowledgment still says there is no buffer space, I have not finished processing what you sent, it waits some more. The timer driving this is called the persist timer; TCP has three or four timers, one is the retransmit timer, another is the persist timer. Whenever the persist timer expires, the sender sends a one-byte probe, looks at the acknowledgment, and decides either to open up the window or to go back into the persist state. That is what TCP will do.

Now answer this question: the mobile host does not know in advance that it is going to get disconnected. If I knew I was going to be disconnected in the next second, I could send this zero window advertisement myself. There are multiple proposals here. There is one scheme where the mobile tries to anticipate the disconnection and plays it safe by sending a zero window advertisement early; otherwise, the mobile host comes to know about the disconnection only after it has happened, and at that point a zero window advertisement is useless, it will not get across; chaos has already occurred. In that case, the base station withholds the ACK for one byte and uses it for the zero window advertisement, the ZWA. Remember that TCP is a byte-based protocol, not a packet-based protocol: if I have sent you 1000 bytes, it is possible for you to acknowledge receipt of 999 bytes and not the 1000th. So as the mobile host sends acknowledgments to the fixed host, the base station withholds the acknowledgment for the last byte. The base station knows whether the mobile node has got disconnected; once it knows, it sends the zero window advertisement on the mobile's behalf. When the mobile node connects on the other side, it sends a duplicate acknowledgment and transmission can restart. It sounds very simple; there are lots of issues which we do not have time to go through. So: when an acknowledgment is received with the receiver's advertised window equal to 0, the sender enters persist mode and sends no data; when a positive window advertisement is received, the sender exits persist mode. The key point is that on exiting persist mode, the RTO and the congestion window are the same as before.

There are two schemes of this kind which are well known: one is called M-TCP and the other is called Freeze-TCP. In M-TCP the base station withholds the ACK for the last byte; the problem with M-TCP is that you need help from the base station, it has to do work for you. Freeze-TCP is the other scheme, where the receiver itself sends the zero window advertisement. Freeze-TCP is a predictive mechanism: it keeps monitoring the signal strength, and when the signal strength starts falling below a certain threshold, it sends a zero window advertisement. TCP is another very rich area for projects; there are, I would guess, close to a hundred proposals in existence, various tweaks to TCP to make it work in different scenarios, but still no clear solution that works well for wireless networks. Alright, so let us try to summarize.
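Before summarizing, the Freeze-TCP trigger just described can be sketched as a one-line policy at the receiver. The dBm threshold here is an arbitrary illustration, not a value from any specification, and real implementations would smooth the signal measurement rather than compare a single sample.

```python
# Sketch of the Freeze-TCP idea: the mobile receiver monitors signal
# strength and, when it predicts an imminent disconnection, advertises
# a zero window so the sender freezes in persist mode instead of
# collapsing its congestion window. Threshold in dBm is illustrative.
def advertised_window(buffer_space, signal_strength_dbm, threshold_dbm=-85):
    """Window value to put in the next ACK sent by the mobile host."""
    if signal_strength_dbm < threshold_dbm:
        return 0                  # zero window advertisement: freeze sender
    return buffer_space           # normal advertisement

print(advertised_window(4096, -70))   # healthy link: 4096
print(advertised_window(4096, -95))   # disconnection predicted: 0
```

The point of returning 0 rather than simply going silent is that a zero window parks the sender in persist mode, so the RTO and congestion window survive the handoff unchanged.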
So TCP over wireless basically TCP assumes that packet loss implies congestion, now that is not valid in wireless environments right, so the invoking of the congestion control response is inappropriate, several proposals to adapt TCP to wireless environments, see you can always find cases where your proposal will work well right, so that is what that is how people write a lot of these papers, they will do experiments to show that okay, under these cases the proposal works well, there will be a whole and then a whole bunch of other people will do experiments to show that under these cases that proposal does not work well, so we will come up with a new proposal, so this happens all the time okay, so the question is what happens now in a multi hop scenario when there are two wireless hops, so we are coming to that when we talk about ad hoc networks okay, so in a multi hop scenario see already if there is a single wireless hop itself TCP goes kind of crazy okay, so in a multi hop wireless scenario nobody really knows how to set up TCP properly okay, so quite often this problem is compounded because of your 802.11 Mac which is being used okay, so 802.11 remember that string topology thing that we did okay, we did that exercise of RTS, CTS people interfering with each other and so on, now imagine that TCP data has to go in one direction, TCP acknowledgments have to go in the other direction okay, so the same node has to pass data in one direction as the data, pass the acknowledgement in the other direction, so it becomes totally chaotic as far as the TCP traffic is concerned, so because you are not able to get access to the medium access medium right, because you are not able to access the medium in order to transmit TCP can time off okay, so it becomes too complex for most TCP proposals to be useful in such scenarios, so anything that goes beyond two hops of wireless people just say looks like I do not need TCP for this maybe I can do with UDP okay, so most of the 
applications, people tweak the application; nobody wants to do FTP over a multi-hop wireless link. Everybody will talk about a voice over IP or a video kind of application, very safe applications, because they do not use TCP. People say, why do I need FTP over a multi-hop wireless link, or they will make it a fixed link; those kinds of strategies are used. Okay, let us look at ad hoc networks, and if the question is still unanswered we will do that now. So multi-hop wireless basically means that a packet may have to traverse multiple links to reach the destination, and mobility causes route changes; that is basically what happens. If this guy moves off from here, then my route is going to break. So suppose this is my original host movement: A is talking to B, these guys move off like this and it becomes like this, so I have to route the data through the intermediate nodes. There are several questions that I need to answer. First of all, how do I find out where B is? That is the first question I have to answer: how do I know where B is? The only solution here is to broadcast the packet. I am not going to read out the applications of mobile ad hoc networks, there is a whole bunch of them; 802.11 is the MAC which is used. Let us look at routing protocols, let us go to one example. The key example here, the first protocol that was invented in this area, is called Dynamic Source Routing, or DSR. It works in a very simple manner; I have a figure here somewhere. Let us say this is the network, and S wants to send a packet to D. Initially S does not know where D is. So what will you do? You will flood the network: when you broadcast to everybody and ask them also to broadcast to everybody else, that is called flooding. It starts 
from one point and it keeps spreading; that is called flooding. So you start flooding the network: S will send a packet to E, C, B and so on. What will go in the packet? The source and the destination you are trying to reach. So that is the broadcast transmission that happens: S is broadcasting with its source ID and saying, I am trying to reach D, and it goes to all these guys. Then what happens? These other guys also further broadcast the packet, while adding their names to the list. Why is this required? So that I know what the path is, so that I can reconstruct the path. So each node just continues this broadcast by adding its own identifier, and at some point, with all these guys broadcasting, the packet is going to reach D. So D knows, I have received a packet saying S-E-F-J-D, and D also knows, I have received a packet saying S-E-G-K-D, and D picks one of them to reply. How does it reply? It reverses the route: it basically says, okay, I got a request through S-E-F-J, so if I go via J-F-E I should reach S. It just reverses the route and sends back a route reply saying, this is the way in which you can reach me, and then the data can flow along the same path. So that is what is happening. D does not forward the route request, because D is the intended target; instead D sends back a route reply saying the route is S-E-F-J-D, and then S can send the data along that route. Is that clear? It is a very simple protocol, one of the earliest protocols invented for ad hoc networks, and surprisingly it continues to be an efficient protocol even today in the face of a lot of other, you know, inefficiencies and other problems. So can we say what are the problems in this protocol? 
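The discovery procedure above can be simulated in a few lines. Here a breadth-first search over the topology plays the role of the flood, since the first copy of the request to reach D carries one of the fewest-hop paths; this is a simulation of the idea, not an implementation of the DSR packet formats, and the adjacency below is only a guess at the lecture's figure.

```python
from collections import deque

def dsr_route_discovery(graph, src, dst):
    """Simulate DSR flooding: each node rebroadcasts the route request
    after appending its own ID, duplicates are dropped, and the
    destination reverses the first path it receives to form the
    route reply."""
    queue = deque([[src]])
    seen = {src}                          # nodes drop duplicate requests
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return list(reversed(path))   # route reply: reversed source route
        for neigh in graph[node]:
            if neigh not in seen:
                seen.add(neigh)
                queue.append(path + [neigh])
    return None                           # destination unreachable

# Hypothetical topology resembling the lecture's figure:
net = {
    'S': ['E', 'C', 'B'], 'E': ['S', 'F', 'G'], 'F': ['E', 'J'],
    'G': ['E', 'K'], 'J': ['F', 'D'], 'K': ['G', 'D'],
    'C': ['S'], 'B': ['S'], 'D': ['J', 'K'],
}
print(dsr_route_discovery(net, 'S', 'D'))  # ['D', 'J', 'F', 'E', 'S']
```

The returned list is exactly what the route reply conveys: follow D-J-F-E-S backwards and you have the source route S-E-F-J-D for the data.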
So DSR, the basic summary: S sends what is called a route request packet; all the nodes append their IDs and forward the route request; D receives the route request and sends a route reply by reversing the route, that is, by reversing the order of the IDs; and then S starts the data transfer. So what are the problems that we saw? If any node moves, and it does not have to be just D, mobility can cause route breakage; that is the key problem in ad hoc networks, any mobility can lead to route breakage. So how do I discover that there is a route breakage? Let us look at this figure: say F moved away, so how does E know that F has moved away? Correct, that is why the acknowledgment is very important at the MAC level. E has sent data to F at the MAC layer; remember, this is not the TCP layer, this is not even the network layer, at the MAC layer itself there is an acknowledgment, since we are using 802.11. So E has sent RTS, it has got CTS, it has sent the data to F, and then it has to get the acknowledgment back. At some point, if F moves away, E is not going to get the MAC-level ACK, and as a result it will know that F seems to have moved away or something has happened. So E sends back something called a route error message to S, and now S can restart the whole process of trying to find another route. So that is one of the key issues: mobility causes route breakage. So what people did was keep trying to optimize this. They said, since the source is anyway going to have to find routes again, maybe these routes can be cached. So one optimization that was tried is route caching. What do we mean by route caching? 
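The route maintenance step just described, detecting a break from a missing MAC-level ACK and reporting it upstream, can be sketched roughly as follows. The retry count and the function names are invented for illustration; real 802.11 and DSR behavior has considerably more machinery.

```python
def send_with_mac_ack(link_up, retries=4):
    """Pretend MAC-layer unicast: returns True if an ACK came back
    within the retry limit, i.e. the next hop is still reachable."""
    for _ in range(retries):
        if link_up():            # in reality: RTS, CTS, data, wait for ACK
            return True
    return False

def forward(packet, next_hop_alive, send_route_error):
    """DSR-style maintenance at an intermediate node like E: if the
    MAC ACK from the next hop (F) never arrives, assume the route
    broke and send a route error back towards the source S."""
    if send_with_mac_ack(next_hop_alive):
        return 'forwarded'
    send_route_error(packet['source'])   # S will then rediscover a route
    return 'route_error'

# F has moved away, so the link is dead and E reports the breakage:
errors = []
result = forward({'source': 'S'}, lambda: False, errors.append)
print(result, errors)  # route_error ['S']
```

The key observation is that no extra probing is needed: the MAC-layer ACK that 802.11 already provides doubles as the liveness check for the route.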
See this node H, for example: H has forwarded the message saying S wants to reach D. H has no interest in either S or D, but still it has forwarded the message. What has it gained out of that? It now knows that if it wants to reach S, it can go via B, because that is the way the packet came to it. So from whoever is forwarding the packet to me, I know what the path back is, so I have partial information about the network. That is what the DSR guys initially tried to sell it as, saying we can have this route caching. Then a whole bunch of people discovered that there are a lot of problems with route caching. H may have cached the information saying, this is the route to S, to reach S I go through B, but the cached information may be stale; by that time B may have moved away. In that case what happens is you send the packet, wait for a route error, and only then restart the whole transmission. So there is a whole bunch of protocols in this manner; DSR is one of them, and another popular protocol is called AODV, which tries to fix some of these issues with DSR. Then there is a whole other category of protocols which try to maintain a balance between proactively maintaining routes versus reactively finding routes. We will not go into all of them; what I will do is just ask a question. Is DSR clear to everyone? The path: is it the shortest path that it will choose, or will it choose any path? So now here is a very important point in wireless: in wireless, the shortest path need not be the one which has the least number of hops. In fact it is not known at all how to determine what the shortest path is. What is the 
shortest path? I mean, if you think of it in English terms, the path along which it takes me the least time to travel is my shortest path. In a wired network that typically corresponds to the least number of hops, but in wireless what may happen is that the path with the least number of hops is the most congested, the one on which I have to wait the longest; so a longer path may actually get the packet there faster than the shorter path. So it is a messy thing. Then again, there are schemes which say, you know, suppose all these nodes are also battery powered; there are schemes which say that if I keep using the same path, I am draining the battery of those devices, so sometimes I will use this path, sometimes other paths, so that the lifetime of my network is increased. If you look at the area of sensor networks, this is a very important concept there: how to increase the lifetime of the sensor network. You have thrown sensors on a battlefield, say, or on a no man's land, and they are running this protocol in order to route the sensed information; now if you use the same path all the time, some of the intermediate sensors are going to die and you are not going to be able to maintain the network. So there are protocols which do a probabilistic path selection. A lot of work has happened in this area; we are just looking at the most basic thing. How is security implemented in these systems? There is no security as of now. There are a lot of things that could happen; it is easy to break the system right now, because I could just send a fake route reply. Security here is by and large not so much data security as denial-of-service security: if I just send fake route replies, or if I just flood the system with fake route requests, 
I can bring down the system very fast. So there are some proposals to deal with security, but none of them is really well established. See, 802.11 became popular, and when people were talking about ad hoc networks it turned out that 802.11 was a convenient MAC to assume. Subsequently it also turned out that 802.11 creates a lot of problems, but there were already deployments by then, so nobody now has the time to go and say, let us do a totally different MAC for ad hoc networks. So ad hoc networks means that by default you assume 802.11 is the MAC. See, if there is a base station somewhere, then you can give the job of coordinating the MAC to that base station, but ad hoc by definition is saying, no, all of you guys have come, you have brought your laptops, and you want to start sending messages from one corner to the other while moving around; there is no infrastructure here. So the only MAC you can use is: sense the medium, see if it is free, and transmit. So my question is, what happens here? This is an example of an asymmetric link: A has a larger transmission range than B, so A is able to reach B but B is not able to reach A. Now DSR depends upon the fact that I am going to reverse the route: I got the route request along one path, and I reverse the route in order to send back the reply. If these are asymmetric links, then what happens? My route reversal can fail. So how do I fix it? Route reversal fails, so the fix is to do a route discovery from D to S: I will do a new route discovery as though D wants to find a route to S, flooding happens, and I will piggyback my route reply on that discovery. Is that clear? 
S has sent the first route request, but since the path is asymmetric, D cannot reverse the route to send back the route reply. But D knows the answer: it knows that to reach D from S you go S, A, B, C, D. It knows the answer, but it has no way of conveying that answer back to S by simple route reversal. So what it does is start its own route discovery; maybe it will find some other path, but along that path it will piggyback the answer, it will just say, if you want to reach me, this is the route, and that is the way asymmetric links can be handled. So what is the key thing I am trying to say here? The key thing is that if we do not keep in mind what the application is (and there are a hundred proposals at various levels), say an application like a disaster recovery scenario, and you design your solution for that and then take the solution and try to put it into another scenario, 99% of the time it is going to fail. You have to start from the application: you say, okay, this is a military scenario, I have a group of vehicles going in this direction, so there is mobility but at the same time the relative mobility is very low, and you might design a routing algorithm for that, you might design a TCP strategy for that. If you just take that and put it into another scenario, like, you know, people going to office and then going back home, it will fail again. So at all times in an ad hoc network you need to be able to map all the way from the application down to the MAC; the whole chain has to be complete, and if at any one point your assumptions are not matching, then the solution is going to fail. So how about TCP in these networks? There are several factors that affect TCP: wireless transmission errors, for one. And there is one interesting thing that happens with TCP in ad hoc networks. One is 
multi-hop routes: longer connections are at a disadvantage compared to shorter connections. Can we think of any scenario in which mobility actually helps? See, suppose I am trying to send a packet to him, and there are n wireless hops in between; the TCP connection is not going to do very well. But if, because of mobility, I happen to go near him, then my throughput can increase. So such scenarios also have to be investigated; sometimes those are the kinds of graphs one sees, because when you are doing a random simulation you cannot make out what exactly is happening in the network. So sometimes mobility also helps, because the source manages to come close to the destination, and in those cases mobility becomes a useful thing; more often than not, though, mobility causes route breakage. So it is very difficult to make a categorical statement saying, if I do this strategy then it will always work. So, ad hoc networks, if you want to summarize: routing is the most studied problem; the interplay of layers is being researched, which is what I meant by saying you have to start from the application and keep working down, making sure that all the assumptions are consistent; and it is a fertile area for imaginative applications. How many nodes at maximum can there be in an ad hoc network? It depends upon the application. As far as deployments are concerned, people have done them for tens of nodes; people have not done deployments for hundreds of nodes. So the last topic of this session is wireless metropolitan area networks, 802.16. Now this is again getting very hot, so I debated for a while about whether to introduce it or not, but then I thought let me at least introduce you to the topic, because it is a very vast topic in itself; I can go on for half a day just talking about .16. So what is the key idea? Now this is a mix of ideas that 
we have already seen. We saw two technologies in great detail, GSM and Wi-Fi, and the reason we saw them in such detail is that those concepts keep recurring; this is a mix of various ideas that we have already encountered. The idea is that instead of having wires going to various buildings, various locations, I will have wireless links: I basically create wireless links from a base station to the various customers, and try to achieve something like cellular performance using something like a Wi-Fi link. That is the basic approach. The commercial name for 802.16 is WiMAX, or WirelessMAN; you may encounter these terms used interchangeably in the literature. WiMAX is basically a subset of 802.16. The goal is to provide high-speed internet access without wires, and to support applications with different QoS requirements. The products are not fully out yet; the initial field trials for WiMAX equipment are still going on. I think there are a few deployments done, but there is no large-scale deployment of WiMAX yet. So let us look at this. What is our goal? Our goal is to support applications with different QoS requirements, and we want to design a MAC. How do we design the MAC, what kind of MAC shall we use? If I let you speak for 5 minutes and I let that person speak for only 1 minute, I have supported different QoS requirements, right? So given that this is a base station and subscriber station model, a central node which captures both of these is the one which is most useful. So what .16 does is allow two different types of duplexing schemes: either you do frequency division duplex or you do time division duplex. What this means is the following: in 802.16 you have a base station, the base station has, let us say, a time frame, and I have subscriber stations 
1, 2, up to n. So what the base station does first, in what is called time division duplex, is this: suppose I have a 10 millisecond time frame; it says, let us say, 6 milliseconds is downlink and 4 milliseconds is uplink. The first thing it does is split the time into two parts: out of 10 milliseconds, for 6 milliseconds I am just going to be transmitting in the downlink direction, and for 4 milliseconds I am going to allow people to transmit in the uplink direction. Why 6 and 4? There is no deep reason really; you just guess that it could be because mostly people are downloading rather than uploading. On the uplink you expect mostly requests; on the downlink you expect mostly to be serving web pages and so on. So generally, when you do a TDD system, people try for something like a two-thirds to one-third ratio. On the downlink I send data to the various subscribers, and on the uplink the subscribers are able to send me data; that is basically what is happening. So suppose a voice call arrives at the SS. The SS informs the BS saying, okay, there is a voice call that has arrived, please allocate a connection identifier for this voice call. Once the BS has allocated a connection identifier, it is just like a token: after that you do not have to put all these various headers, you just refer to the call by that one number assigned to it. So what is the key question here again? How do I do the uplink allocation, because that is what is going to determine my quality of service and everything. In the case of .11e we saw that there were 4 access categories, right? What were those 4 access categories? 
Voice, video, FTP, HTTP, these are the examples. In the case of .11e these were called access category 1, access category 2, access category 3 and access category 4. In the case of .16, the first is called UGS, which stands for unsolicited grant service. What this means is that once a voice flow is set up, once a voice connection is set up, the base station knows that every 20 milliseconds there is going to be a voice packet going in each direction. So without the subscriber station explicitly asking for bandwidth, the base station will allocate it, because it knows the type of flow. That is why it is called an unsolicited grant: without you asking me, I will give you bandwidth, because I know that the flow is a voice flow with a certain periodicity. So that is a UGS flow. What about video? Video is called rtPS, which stands for real-time polling service. How do I do allocations for this? Periodically I am going to poll the station and ask how much bandwidth it requires, and I am going to allocate a certain space in my uplink for sending back the polling response. See, even for you to send me a response, I have to give you bandwidth; it is not an automatic request-reply scheme, because on the downlink I can poll you, but the uplink is where you have to send me the response. That is why it is called real-time polling service. FTP would be called non-real-time polling service, nrtPS. What is the difference between real-time and non-real-time polling service? In the real-time case I need the service within a certain deadline: I need you to do the allocation in the next frame or the frame after that. Whereas in the FTP case, the non-real-time polling service, I can wait for a while; as long as I get the amount of bandwidth that I am requesting, I can wait for some time. So that is the difference. 
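The behavior of these service classes can be caricatured in a per-frame grant loop like the one below. The field names, polling periods, and frame timing are invented for illustration (the real standard's mechanisms are richer), and the fourth class, best effort, is included on the assumption that it gets no dedicated grant and instead uses the contention opportunities.

```python
def uplink_grants(frame_no, flows):
    """Toy base-station logic: per frame, decide which connections
    get an unsolicited grant, which get polled, and which get nothing."""
    grants = []
    for f in flows:
        if f['class'] == 'UGS':
            # Unsolicited grant: fixed-size allocation every f['period']
            # frames, with no bandwidth request from the subscriber.
            if frame_no % f['period'] == 0:
                grants.append((f['cid'], f['size']))
        elif f['class'] == 'rtPS':
            # Real-time polling: room for a bandwidth request every frame.
            grants.append((f['cid'], 'poll'))
        elif f['class'] == 'nrtPS':
            # Non-real-time polling: polled only occasionally.
            if frame_no % 10 == 0:
                grants.append((f['cid'], 'poll'))
        # 'BE' flows get no dedicated grant here: they use the
        # contention request opportunities instead.
    return grants

flows = [
    {'cid': 1, 'class': 'UGS', 'period': 4, 'size': 160},  # voice: 20 ms / 5 ms frames
    {'cid': 2, 'class': 'rtPS'},                           # video
    {'cid': 3, 'class': 'nrtPS'},                          # FTP
    {'cid': 4, 'class': 'BE'},                             # HTTP
]
print(uplink_grants(0, flows))  # [(1, 160), (2, 'poll'), (3, 'poll')]
print(uplink_grants(1, flows))  # [(2, 'poll')]
```

Notice that the voice connection never has to ask: its grant recurs automatically, which is exactly what "unsolicited" means.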
And what is HTTP called? Best effort. So, frequency division duplex means I use one set of frequencies for the downlink and another set for the uplink; time division duplex is what we just saw. It supports both full duplex and half duplex stations, and it also has a notion of adaptive profiles, which we will not go into. So this is the example of time division duplex: there is a downlink subframe and there is an uplink subframe, and this boundary between downlink and uplink can also be moved. The MAC is connection oriented, and channel access is decided by the base station. So there was one question which I forgot to ask. Once I have come up with a schedule, once I have decided that on the uplink you will transmit for 3 slots, you will transmit for 2 slots, you will transmit for 5 slots, how do I convey that information to you? Every downlink frame starts with a beacon. What do I put in the beacon? Just like in GSM you have your broadcast control channel, or in any other system you will send a beacon; in the beacon I will say, okay, this is my BS ID, and I will also give you the allocation. Everybody has to read the beacon. In the beacon I will say, you go first, you go second, you go third, you speak for so much time, you speak for so much time, you speak for so much time, done. That allocation is called a map: the UL map, where UL stands for uplink, and the DL map for the downlink. The UL map and DL map are both transmitted at the beginning of each downlink frame. So now the question is, what about allocation requests, what do I do with the request? The UL map simply says, I am giving you 2 minutes to talk to me; it does not say that you can say only this or only that in those 2 minutes. So the moment there is an allocation in the UL map, you 
can send a request or you can send data. Does that make sense? If a new node arrives in that area, then what happens? The new node has to be configured to understand the base station, and it has to send its first packet in the best-effort part; connection establishment is always in a best-effort mode, so there will be some part which is left for best-effort traffic. So this is how it happens: first you have the initial maintenance opportunities (do not worry about that), then you have request contention opportunities, in which new stations can send their requests, and then you have the scheduled data for each different SS. Now, how much time I give to each SS is totally defined by the scheduler being used at the base station. So scheduling in WiMAX is a good area to look at for potential master's problems or PhD problems, in case you want to look at that; I routinely give these problems to students as master's problems. So just to quickly recap: in the preamble there is a beacon which has a DL map and a UL map. The DL map defines what is going to go on the downlink. Why is the DL map required? Everybody is listening to it anyway; the downlink is anyway a broadcast, so why do I still have a DL map? Suppose I were to say that I am going to teach topics X, Y and Z, and you already knew topic Y; then you can sleep during that time. The same thing here: if I announce in the DL map that I have data for X, Y and Z, then some node A, B or C for whom I have no data can sleep during that time. So the DL map is used for power save: I save power by announcing who all I am going to transmit to in the downlink, and in the uplink I say who all are going to transmit back to me. Within each, you have a notion of a scheduled system. So let me just quickly summarize here. The subscriber station may request bandwidth in three ways: one is using 
the contention request opportunities, which is the polling thing; or it can send a standalone message called a bandwidth request; or it can piggyback a bandwidth request message on a data packet. Any of the three ways; this is fairly straightforward, there is no complexity involved here. The base station may grant bandwidth in one of two modes: either it grants bandwidth to an entire subscriber station, within which there may be 10 different connections (think of it as n windows which you have opened on a machine, so you are granted bandwidth for the entire machine), or it grants bandwidth on a per-connection basis. Mostly what is done is grant per connection, because grant per subscriber station gets a little more complex for maintaining the QoS requirements. Grants are notified through the UL map. So here is an example: you have a base station and a couple of subscriber stations. First, the base station allocates bandwidth to the subscriber stations for transmitting their bandwidth requests; SS1 transmits its bandwidth request, SS2 transmits its bandwidth request, and then the base station allocates bandwidth to the subscribers for transmitting data based on their bandwidth requests. Is that making sense? This is the basic bandwidth grant protocol; then they transmit their data and everything works out after that. Let me do this example. Let us say the total uplink bytes available are 100; there are two subscriber stations and one base station, and the demands of the subscriber stations are like this: subscriber 2 has a UGS flow requirement of 10, rtPS 10, nrtPS 15, best effort 20. Now if all these numbers add up to 100, then there is no problem, life is very comfortable, we can easily satisfy all the demands of both subscribers. Unfortunately it does not add up to 100, so that is when we 
have the problem of figuring out how much allocation to give to each flow. So this is the key way in which to understand 802.16: the base station is going to compute the total demand per flow class. See the difference here: it is not the total demand per subscriber station, it is the total demand per flow class that is computed, because you want to give quality of service on a per-flow basis. You say, okay, 30% of my bandwidth I am going to give to UGS, 20% to something else, and so on; that is what it begins with. The proportion we are using here is 4:3:2:1, so in the first round I give 40 bytes to the UGS class, 30 bytes to the rtPS class, 20 bytes to the nrtPS class and 10 bytes to the best-effort class. Then what happens? You find that the total UGS demand is only 30, whereas you could allocate 40 to UGS, so you have an excess supply of 10 bytes there which can be reclaimed, and similarly you have an excess supply of 8 bytes from rtPS. So you have 18 extra bytes which you now have to distribute among the other two classes. In the next round, with the excess of 18 bytes, I do a 2:1 proportion: I give 12 extra bytes to nrtPS and 6 extra bytes to the best-effort class. So where does that leave us? Now I have 32 bytes allocated for nrtPS, and the total demand for nrtPS is 30, so I still have 2 excess bytes left over in this class, and in the next round those 2 excess bytes are allocated to the best-effort class. So the first thing that is done in a .16 network is allocation on the basis of flow classes; once I have done the allocation on the basis of flow classes, then I go back and see which station should get how much. Now it turns 
out that I have satisfied all the UGS flows (both of them), I have satisfied the rtPS flows and the nrtPS flows, and for best effort my total demand is 50 while the total allocation possible is 18. So now what do I do? I can do it in many ways: I could say that 9 of this goes to SS1 and 9 goes to SS2, or I can give this 18 in a 3:2 proportion depending upon the demand. The standard does not specify how to do the best-effort allocation, so this is just one example. Finally, SS1's allocation can be 20 plus 12 plus 15 plus 9, and SS2's allocation can be worked out likewise. Is that clear? So this is one of the key intricacies in .16; there are a lot of variants of this scheduling which, you know, you can take up with your students. Alright, so this is the summary: WiMAX supports different applications with different types of QoS requirements, it does adaptive scheduling, initial deployments are underway, and it is a fertile area for studying the interplay of scheduling algorithms with the PHY and the MAC. These are the references; many of these slides, and some other slides, are also available on my web page. Any questions?
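As a closing aside, the per-class allocation walked through above can be reproduced in a few lines. This sketch redistributes each round's unused share among the still-unsatisfied classes; since the standard leaves the scheduler open, this is just one way to arrive at the same numbers (the per-class totals below sum both subscribers' demands, with SS1's individual figures inferred from the final 20 + 12 + 15 + 9 split).

```python
def allocate(total, demand, weights):
    """Weighted allocation with redistribution of excess: give each
    class up to its weighted share of what remains, capped at its
    demand, and repeat until the bytes run out or everyone is
    satisfied."""
    alloc = {c: 0 for c in demand}
    remaining = total
    while remaining > 0:
        unsat = [c for c in demand if alloc[c] < demand[c]]
        if not unsat:
            break                          # all demands met
        w = sum(weights[c] for c in unsat)
        given = 0
        for c in unsat:
            share = remaining * weights[c] // w      # integer bytes
            take = min(share, demand[c] - alloc[c])  # never exceed demand
            alloc[c] += take
            given += take
        if given == 0:
            break                          # shares rounded down to zero
        remaining -= given
    return alloc

# Total demand per class, summed across both subscriber stations:
demand  = {'UGS': 30, 'rtPS': 22, 'nrtPS': 30, 'BE': 50}
weights = {'UGS': 4,  'rtPS': 3,  'nrtPS': 2,  'BE': 1}
print(allocate(100, demand, weights))
# {'UGS': 30, 'rtPS': 22, 'nrtPS': 30, 'BE': 18}
```

Round one hands out 30, 22, 20 and 10 bytes (the UGS and rtPS shares are capped at their demands), the 18 reclaimed bytes go 2:1 to nrtPS and best effort, and the last 2 bytes of nrtPS excess end up with best effort, matching the lecture's 30/22/30/18 outcome.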