We've covered a lot of the details of how to get data across a link, and in the last topic, on networking and protocol architectures, we started to introduce some concepts and ways to design networks, especially what a protocol architecture is. We mentioned something called the TCP/IP five-layer protocol architecture: physical layer, data link layer, network, transport and application. Now, I know that some of those concepts don't make much sense yet. The idea is that now we'll start looking at the technologies, especially regarding the network layer and then transport and application, and hopefully by the end of the semester that five-layer TCP/IP model will be much clearer to everyone. So we're going to look at networking now. How do we send data across a network? We'll talk about switching and what a switched communications network is. So let's introduce that. So far we've focused on transmitting data across a link. Now how do we use networks to connect multiple devices, multiple links, and get data across them? Those networks we refer to as communication networks; in particular, switched communication networks use switching. To explain that, let's use this example. In this example picture we have six of what we'll call stations. Stations are the devices that produce data and consume data. The six devices A through F on the outside, some PCs, a mainframe, a server, are the end-user devices in this simple small network: the devices that are going to create data and receive data. In general we'll refer to them as stations in our network; sometimes later we'll refer to them as hosts. With six stations, and wanting to allow any station to communicate with any other station, how many links do we need? Not in this network as drawn, but if I want direct links between every pair of stations, how many links do I need?
Just six stations you want to connect with every other station: how many cables would you need? For example, A needs a cable to B, C, D, E and F, so A needs five cables. Not on this picture, but if we want to connect directly, B would need cables to five other stations, but we've already counted the cable to A, so B needs another four cables; that's nine. C would need another three cables, that's 12, plus another two from D and one from E. We'd need 15 cables to connect six computers together so that anyone can communicate with anyone else. What if you have a hundred computers, how many cables do you need? The first computer needs 99 cables connecting to the 99 other computers, so that's 99 so far. The second computer already connects to the first, so it needs another 98 cables to connect to the rest, and so on. It's about 5,000 cables: 100 times 99 divided by 2, which is 4,950. So if we have a network of 100 computers and we want to connect them directly to each other via links, you need about 5,000 different cables in the network, and each computer needs 99 different cables plugged into it. Not possible, not feasible. If you grow that to a thousand users, you now need about 500,000 cables to connect them all together. So we don't connect the end stations directly to everyone else; we connect them via some intermediate devices, where those intermediate devices have the role of forwarding our data on to the destination, and that's what we get in this picture. We have the stations, the ones which are going to create and consume data, on the outside, and we introduce some intermediate devices in the middle, these green boxes, which we'll call switching nodes, or nodes, or switches.
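The counting argument above is just the handshake formula: each of the n stations needs a link to the other n - 1, and every link is shared by two stations. A quick sketch to check the lecture's numbers:

```python
# Number of point-to-point links needed for a full mesh of n stations.
# Each station links to the other n - 1 stations, and every link is
# counted twice (once from each end), so we divide by 2.
def full_mesh_links(n):
    return n * (n - 1) // 2

print(full_mesh_links(6))     # 15 cables for the six stations
print(full_mesh_links(100))   # 4950, the "about 5,000" in the lecture
print(full_mesh_links(1000))  # 499500, roughly half a million
```

This quadratic growth is exactly why we add switching nodes: with them, each station needs only one link.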
Their role is not to create data, not to receive data, but just to forward data on behalf of the stations, and the way they'll forward the data is that they'll need to make a decision about where to send it to reach the destination. That decision we liken to a switch, where we can choose between two or three different possible outputs; we switch between the different outputs. So in this example we have seven switching nodes in our network, and this network allows our six stations to communicate with any other station. We don't need cables between each pair of stations; instead every station has a cable into a switching node, just a single link, and then the switching nodes are connected together, though in this example not all of them are connected. If A wants to send data to F, for example, one approach: A sends it to switch 4, switch 4 sends it on to switch 7, and then switch 7 sends the data to switch 6, which sends it on to F. A is the source, F is the destination, and 4, 7 and 6, these switching nodes, simply forward the data on. That's how we build networks: we use this concept of intermediate devices, the switching nodes, that forward the data on. But there are two or three different approaches to the way they deliver that data when they forward it, which we'll look at in this topic. We call them switches because, for example, when switch 4 receives data coming from A, it may go in this direction to 1, to 5 or to 7. You can think of the decision like a switch: go this direction, or switch to this direction, or to this other direction. We choose one of those possible output links. In this network, and commonly in most networks, the switching nodes are not fully connected.
What I mean by that is that the switching nodes, unlike the stations which each have a link into a node, do not have links to every other switching node. Seven only has links to 4 and 6; it doesn't directly link to 5, 1, 2 or 3. So we say this is a partially connected network, not fully connected. We see examples of switched communication networks in many different technologies today, but mainly covering large areas, what we refer to as wide area networks. For example, you have a company which has offices in different cities and you want to connect those offices together across the entire country. Maybe what you would do is deploy or build your own network of switching nodes, where the stations represent offices, which generate and consume data, and the switching nodes cover, say, Thailand and have links across the country to connect the different offices in different cities. So often we're talking about large-scale networks across a wide area, and therefore having links between every pair of switching nodes doesn't make sense. It's not cost effective because, A, we don't need a link between every pair of switching nodes to allow everyone to communicate and, B, it's costly: every link we deploy costs money to build and maintain. So this is a partially connected network, not fully connected. We refer to the outside devices as stations and the internal ones as switching nodes, and together we get a communications network, in particular a switched communications network. If you want to send from B to D, what's going to happen? B to switch 1, then on to switch 2, then to switch 3 and then D. Why did you choose that path? The shortest path. There are other paths, we note: it could go B, 1, 4, 5, 3, D, or go around 1, 4, 7, 6, 5, 3, D.
Some people like 1, 2, 3 because it's the shortest in terms of the number of links, and that may be a common technique for choosing which path to use. The point is that we often desire to have multiple possible paths between a pair of stations. If you look at every pair of stations, there are multiple possible paths, and therefore when we have multiple paths, we need to choose one of them. The process of choosing a path we call routing: choosing a route. You choose a path to drive from here into the centre of Bangkok; there are different possible paths and you may choose based upon different criteria. We will cover that in the next topic: how could we choose the path? Today let's just assume that we'll choose one of them. It could be the shortest path, or, as we'll see in the next topic, maybe the links have different data rates and the path 1, 4, 5, 3 would give us better performance, so we would choose that path. But choosing the path is the next topic, routing. Let's assume we've got some way to choose a path. We would often build our network such that there is more than one path between a pair of stations, like in this case. The good thing about having multiple paths: say from A to F, I can go 4, 7, 6 or 4, 5, 6. If switching node 7 fails, there's an error at that switching node, then I can use the alternate path. Or if switching node 7 is slow, because everyone is sending data through it, maybe I want to avoid that and send my data via a different path. So that's why we'd like to have multiple paths: it can give better performance and more reliability in the event of failures. It's the same as when you drive into Bangkok: there are different paths to take, and sometimes you'll take a path to try to avoid congestion, avoid a traffic jam. We may use those techniques in networks as well, choosing a path to route data around where the network is congested.
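The multiple paths between B and D can be enumerated mechanically. The sketch below assumes a link set reconstructed from the example figure (only some links are named explicitly in the lecture, so the adjacency table is an assumption) and finds every loop-free path between B's switch (1) and D's switch (3):

```python
# Switch-to-switch links as read from the example figure -- an assumption,
# since the lecture only names some of them explicitly.
links = {
    1: [2, 4],
    2: [1, 3],
    3: [2, 5],
    4: [1, 5, 7],
    5: [3, 4, 6],
    6: [5, 7],
    7: [4, 6],
}

def all_simple_paths(start, goal):
    """Enumerate every loop-free path between two switching nodes (DFS)."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            paths.append(path)
            continue
        for nxt in links[node]:
            if nxt not in path:  # never revisit a node: keep the path loop-free
                stack.append((nxt, path + [nxt]))
    return sorted(paths, key=len)

for p in all_simple_paths(1, 3):
    print(p)
# Shortest is [1, 2, 3], i.e. B -> 1 -> 2 -> 3 -> D in the lecture;
# [1, 4, 5, 3] and [1, 4, 7, 6, 5, 3] are the alternatives mentioned.
```

Choosing among these paths, by hop count, data rate, or current load, is exactly the routing problem the next topic covers.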
But that's the next topic, routing. So in general, we build a network using intermediate devices called switching nodes, and they forward the data on to get it to the destination. This is a small example; if we imagine more stations, hundreds or thousands, and more switching nodes, it will be a more complex network. Let's think about the links between the devices. Commonly, the links from the stations to the switching nodes are dedicated to carrying the data of just that station. That is, the link from A to 4 carries only A's data: when A wants to send something, it goes across this link, and when others send to A, that also goes across this link. This link carries the data only for station A; data from F to B does not go across this link to A. So we say this link is dedicated to A. But the links between the switching nodes often carry data from multiple users at the same time. Imagine switching node 7 is not here; let's say it's failed, so we can't use it. Station E wants to send data to station C, and F wants to send to B at the same time. So both stations send data to switching node 6. If 7 is not there, the only choice switching node 6 has is to send to 5, so the link from 6 to 5 must carry the data from both E and F at the same time. How do we carry the data from two users at the same time across a link? What was the topic yesterday? Multiplexing. Remember, multiplexing is the approach where we have a link and want to carry the data from multiple users at the same time; FDM or TDM allows us to do that. The links between switching nodes often use multiplexing. F transmits data to 6, E transmits data to 6 at the same time, and multiplexing is used to send that data on to 5.
Either frequency division multiplexing or time division multiplexing: we either give the signals different frequencies so that we can send them at the same time, or we alternate in time slots, sending a little bit of data from E, then F, then E, then F, and so on. So the links between switching nodes often use multiplexing, and that often means they require higher capacity than the dedicated links to the stations. For example, again, switching node 7 is not there, it's failed. The link from E to 6 has a capacity of 1 megabit per second, and the link from F to 6 has a capacity of 1 megabit per second. The users have paid for those links and they want to use them fully. What should the capacity of the link from 6 to 5 be? Have a guess. E to 6, we need 1 megabit per second; F to 6, we need 1 megabit per second. What about 6 to 5? Two. I want to carry the data of both users at the same time. If E is sending at 1 megabit per second and F is sending at 1 megabit per second, and we want to carry that at the same time from 6 to 5, then 6 to 5 must be able to send at 2 megabits per second, because the total coming in is 2: one coming in plus another one, so we'd like to be able to send at 2 megabits per second out. Generally, the capacity of the links between switching nodes may be much greater than that of the dedicated links to stations. Extending this example, let's say there were 100 stations connected to switching node 6, all using links of 1 megabit per second, and 7 is not here. What should the capacity of 6 to 5 be? Everyone yell out the answer at the same time. 100. That is, if we have 100 stations wanting to send at 1 megabit per second and all of that data needs to go through that one link from 6 to 5, then to carry it all at the same time, the capacity should be 100 megabits per second.
If it's not, if it's 50 megabits per second, I try to send in a total of 100, but only 50 comes out. Where does the other 50 megabits per second of data go? So we need the capacity of that output link to be at least the same as, or exceed, the sum of the capacities of the input links. But not always. Coming back: there are 100 stations having links at 1 megabit per second to node 6, but these stations are not always transmitting. If you think of these stations as end-user computers: I have a link with a capacity of 1 megabit per second, but sometimes I'm not sending anything. Imagine you're web browsing. You click on a link, which triggers you to send some packets, and you use the capacity. But then you get the web page back and you spend two minutes reading it, and during those two minutes you're not sending anything across the link. So for some applications and users, even though the link capacity may be 1 megabit per second, we may not use it all of the time. So what we can do in large networks, to save the cost of these links between switching nodes, is look at the statistics of how often people use the link. If we have 100 users connected to switching node 6, and each of those users on average uses their link 60% of the time, then we could set the capacity of 6 to 5 to be 60 megabits per second. On average, not everyone is sending, so if we're lucky, they'll be sending at different times and we can deliver all of their data from 6 to 5. We can take advantage of the fact that, with multiplexing, the capacity of the output link from 6 to 5 can be lower than the sum of all of the input links and we can still get good performance. We'll finish today's lecture with an example returning to that. Let's step back and summarise what we've covered. We're introducing switched communication networks. Data is transmitted from source to destination.
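The sizing rule just described can be sketched in a couple of lines; this is a back-of-envelope version of statistical multiplexing, not a formal traffic model:

```python
# Sizing a shared trunk: compare the worst-case capacity (everyone sending
# at once) with the capacity needed on average, given a utilisation factor.
def trunk_capacity(n_stations, link_rate_mbps, avg_utilisation):
    peak = n_stations * link_rate_mbps   # sum of all input link capacities
    expected = peak * avg_utilisation    # average offered load
    return peak, expected

peak, expected = trunk_capacity(100, 1.0, 0.6)
print(peak)      # 100.0 Mb/s guarantees nothing is ever dropped
print(expected)  # 60.0 Mb/s suffices on average, as in the lecture
```

The gap between the two numbers is the cost saving, and the risk: if more users than expected transmit simultaneously, the 60 Mb/s trunk is overloaded.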
Those source and destination devices are called stations, the data goes via switching nodes, and the collection is referred to as a communications network. The node-to-station links usually use dedicated point-to-point links, so the data on them is only that of the station: from A to 4, the link is dedicated to station A only. But the node-to-node links often use multiplexing; we carry the data of multiple users at the same time. The network often is not fully connected. In this example, not all switches connect to all others; in fact, most connect to only two or three of the other seven. So we don't fully connect the switches. If we want to fully connect the switches, we can. It gives us more paths, but it costs more, since more links are involved. So generally we don't, and we still get sufficient paths with this partial connection. It's desirable to have multiple paths between each pair of stations: if one path fails or is performing poorly, we can switch to another path. So the question is, what do these switches do? How do they get data from A to F over this path? The question of which path to take, we'll cover in the next topic on routing. Today, we'll look at the techniques for how, when A sends data to 4, it gets delivered on to 7. And there are two general approaches, called circuit switching and packet switching, which we'll introduce today. Starting with circuit switching. When was our last lecture? Less than 24 hours ago. So one question: when did the internet start? What was the answer yesterday, about what year or decade? The 1960s, 1970s; late 60s, early 70s. The internet is one example of a switched communication network, and it actually uses packet switching; we'll see that at the end today. What about some networks earlier than the internet, before the 1960s? What is one of the main communication networks we use?
What's this? A telephone. Okay, not the mobile phone: the landline telephone that you have at home, or maybe used to have at home, that connects via a cable into the wall and out to the telephone network. How long have telephone networks been around? More than 100 years, in fact. Since around the start of the twentieth century, fixed, what I'll call landline, telephone networks have been popular; they've been around for a long time. They are an example of communication networks, and they use a concept called circuit switching. When you pick up your telephone and dial a number to call someone, what happens? It uses circuit switching to connect. The role of a circuit switch in the old telephone networks was to connect different telephone lines together, and in the very old days this is what a circuit switch would do. You would pick up your phone; you didn't even have to dial a number. You were directly connected to the operator, and you would say, I want to talk to Steve, I want to call Steve. Your telephone line from your home has a cable going into the operator's device here, so there's a cable coming into here, and when you say to them, I want to talk to Steve, they look at this device, it's hard to see, and find: okay, here's your telephone line, here's the link to Steve's telephone line. They take a cable and connect them together: they plug one end into yours and the other into mine. That essentially connects the line from your telephone into this office, this telephone exchange, and then through to my telephone line going to my home. It's a bit more complex than that: it rings my phone, and when I answer, we can start talking. And when we talk, the data, which is our voice, travels from your telephone across your telephone line to the exchange.
Inside the exchange it goes across this cable, comes out, goes to my telephone line, and is heard by me at my telephone. This is the role of a circuit switch: when we want to connect two entities to communicate, we establish a circuit connecting those lines together. That was the manual approach; in the telephone networks today the same thing is done, but it's automated using digital switches, so basically computers do it now. When you press the buttons for a telephone number, your telephone sends a special signal to the telephone exchange, which connects your telephone line to the destination's telephone line. A special signal goes to their telephone and rings it; when they pick up, a signal comes back, connecting us all the way through. So that's the role of the circuit switch. The idea is that we want to create what looks like a single link between source and destination, a dedicated path, as if we did have a cable from source to destination. We don't; it goes via the intermediate switch, but we connect them in the switch such that we can send the data straight through. So we create a dedicated communications path between two stations. A path is simply a sequence of links: if you have two links, then the path consists of those two links. And by dedicated communications path, we will see a little bit later that it means the resources of those links are reserved for our communication, dedicated just for us while we're talking, not for anyone else to use. We'll see the consequences of that for different applications. Coming back to the telephone: how do you make a phone call? Pick up your phone and press some buttons; pressing the buttons triggers your phone to send a message to the local telephone exchange. Your home has a line that goes to a local telephone exchange, just one dedicated link. So think of yourself as the station, and that station has a link to one of the switching nodes, the telephone exchange.
When you press the buttons of the destination's phone number, that message goes to the local exchange, and the local exchange recognises where the destination is based upon the phone number and creates a connection to the destination's telephone line. In this picture it's called an end office, but I would simply call it a telephone exchange; usually there's one in each neighbourhood across a city or a suburb. So let's say A wants to call B. A picks up the phone and dials the number for B. A has a link to the telephone exchange, and when they dial the number, a special message is sent to the exchange saying A wants to call B. The exchange recognises: ah, B is directly connected to my exchange, B also has a link into this exchange, so I will forward that message on to B. It goes down this link, is received by B, and triggers B's phone to ring. When the person picks up the phone, that triggers a message to be sent back. A message comes back to the telephone exchange saying B has accepted the call, and that triggers the exchange to create a dedicated connection between these two lines, in the same way the operators did it in the old days; now a computer creates that link. Finally a message is sent back to A, and at that point the tone heard by A changes from waiting, and now they can talk. Then they can transfer data in either direction: A talks and the data is sent across the link through the exchange to B, and when B talks, it goes back in the opposite direction. When they're finished talking, one of them hangs up. That triggers another special message to the exchange: let's disconnect, let's close the connection, we've finished the data transfer. The exchange then removes the link between the telephone lines. So that's the concept of a telephone call. There are three phases. The first is to establish the connection, or as we say, establish a circuit.
Think of it as having a circuit all the way from A to B. The first phase is to establish that circuit or connection. The second phase is to transfer the data: talk on the telephone. The third phase is to disconnect: when you hang up, it disconnects the circuit. This is what's used in telephone networks and also in some other data networks, not just for voice calls. We establish a circuit, and in doing so, from station to station, all the resources along that path are allocated to that circuit. That will come up shortly; we'll see the significance of allocating resources. Once we establish a circuit, we transfer our data. For voice, it can be an analog signal being sent, but we can have digital data communications in a circuit-switched network as well. And when we finish, we disconnect, or terminate, the circuit and deallocate the resources. Back to our telephone network example. If C wants to call D, the same process takes place, except that when C dials the phone number for D and the special message to establish the circuit is sent to the local exchange, this local exchange recognises: D is not on my exchange, D is somewhere else. So the local exchange sends a message via this other link to an intermediate exchange, maybe covering the city, and that goes on to the local exchange of D, and they establish a circuit all the way from C through to D. Once we've done the circuit establishment, we've set up a connection; it's as if we have a cable going all the way through the intermediate exchange to D. Then, when we send data, we're in the data transfer phase: C sends a signal, and you can think of the signal travelling along this line through the local exchange, through the trunk line to the intermediate exchange, through the local exchange of D, and then across D's telephone line, where D receives it, as if we had one long cable from C to D. That's the idea of circuit switching.
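The three phases can be sketched as a tiny state machine. This is purely illustrative, with made-up names, not a real signalling protocol:

```python
# A minimal sketch of the three phases of circuit switching:
# establish, transfer, disconnect.  Illustrative only.
class Circuit:
    def __init__(self, src, dst):
        self.src, self.dst = src, dst
        self.established = False

    def establish(self):
        # Phase 1: set up the circuit end to end, reserving resources
        # on every link along the path before any data is sent.
        self.established = True

    def transfer(self, data):
        # Phase 2: data flows over the dedicated path; no circuit, no data.
        if not self.established:
            raise RuntimeError("no circuit: establish first")
        return f"{self.src}->{self.dst}: {data}"

    def disconnect(self):
        # Phase 3: tear down the circuit and release the reserved resources.
        self.established = False

call = Circuit("A", "B")
call.establish()
print(call.transfer("hello"))  # A->B: hello
call.disconnect()
```

The key property to notice is in phase 1: the resources are reserved before any data moves, and stay reserved until phase 3, whether or not data is actually flowing.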
Connecting two stations as if via one cable. It's not a real cable; we go via these intermediate devices, the switching nodes. We're only going to cover this at a very high level; we're not going to go into the details of circuit switching or packet switching, just introduce the concepts. Circuit switching is used, of course, in public telephone networks, still today: home telephone networks still use circuit switching. It's been used in private telephone networks; within the university, for example, we have a private telephone network where the phones in the offices are connected together. They don't necessarily go out to the outside world; it's free local use run by the university, and that can also use circuit switching. And some private data networks have used circuit switching. A good example was banks. A bank has many offices, branches, and ATMs spread across the country, and all of these need to communicate data, like the transactions that take place. For example, you make a transaction at the ATM; the ATM needs to send some data back to the office to reduce your bank balance and record the transaction. So it was common for banks to build their own networks connecting the branches, the offices, and the ATMs together, and circuit switching was commonly used in the past. What would happen is not a phone call from the ATM to the office, but a circuit-switched connection over which they send data: the transaction records and so on. So even though we're using the example of telephone networks, circuit switching is used for non-telephone applications too. In this example, let's say there are 100 people on this local telephone exchange: not just three telephones, but 100. This trunk line from the local exchange to the intermediate exchange is another link, which would carry the voice calls of the users on the local exchange.
What capacity is necessary for the trunk line? With 100 users on the local exchange, what capacity should the trunk line have? You may count in terms of voice calls, not bits per second or hertz: how many voice calls should we support on the trunk line? It depends. That's a technically correct answer, but not very useful. What would it depend upon? It depends upon what the users want to do: how often they want to call and how long they may call for. The trunk line has to support multiple phone calls, so we'd use multiplexing here. When C wants to talk to D, the trunk line needs to support that one phone call, but maybe there's another one: E wants to talk to F down here, so the trunk line needs to support that second phone call as well. So the trunk line would use multiplexing and should have sufficient capacity to support the expected number of voice calls happening at the same time from this local exchange going out. If there were 100 users here, then at most 100 voice calls could be going out to the intermediate exchange; we can't have more than 100 voice calls if there are only 100 users, since each line can carry only one voice call. So with 100 users, we could give the trunk line the capacity to support 100 calls at the same time. But it turns out most people aren't calling at the exact same time of day, and they may not be calling out via the intermediate exchange; they may be calling locally. So the people who design the telephone network look at the statistics of how often people call, where they call, and for how long, and dimension the capacity of this trunk line to try to allow as many calls as possible while keeping the capacity as low as possible: higher capacity means higher cost. So maybe they dimension it to support 30 voice calls at the same time.
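The classic tool telephone engineers use for exactly this dimensioning question is the Erlang B formula, which gives the probability that a new call finds all trunks busy. The offered traffic below (20 erlangs, i.e. 20 concurrent calls on average from the 100 subscribers) is an assumed figure for illustration:

```python
# Erlang B blocking probability via the standard stable recursion:
# B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)).
def erlang_b(traffic_erlangs, trunks):
    """Probability that a new call is blocked on a group of trunks."""
    b = 1.0
    for m in range(1, trunks + 1):
        b = traffic_erlangs * b / (m + traffic_erlangs * b)
    return b

# Assumed load: 100 subscribers offering 20 erlangs of outgoing traffic.
print(round(erlang_b(20.0, 30), 4))  # with 30 trunks, blocking is around 1%
print(round(erlang_b(20.0, 20), 4))  # with only 20 trunks, far more calls fail
```

This is why "support 30 voice calls" can be a sensible design for 100 users: the operator accepts a small blocking probability in exchange for a much cheaper trunk.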
Even though there are 100 users, hopefully no more than 30 are calling at the same time. If the capacity of this trunk line supports 30 simultaneous voice calls, and there are already 30 people making calls, and then the 31st person tries to make a call, what happens? You may have experienced this. If the capacity supports 30 voice calls and 30 voice calls are already taking place, the 31st person who dials a number going out via that trunk line gets a special message back saying the network is busy. It's slightly different with mobile phones today, but you may encounter the same concept sometimes, maybe on New Year's Eve: you try to call someone, and it's not that the other person is busy, it's the network, in this case the trunk line, that cannot support more calls. Let's consider an extended example of that. We'll draw a simple switched network: some stations A, B, and C, a switching node S1, another switching node S2, and, to keep it simple, some other stations D, E, and F that they want to call; let's link them together. Here's our simple switched network. We have six stations, A through F, and just two switching nodes. Say, for a telephone network, the link from A to switching node S1 supports one voice call; we can only make one voice call at a time. This is our telephone, for example, and D is our other telephone. Let's consider the capacity of this link. The capacity of these lines we can think of as one: we can make one voice call across this line, since we only have one telephone attached, and it's similar for these other links. The capacity is one: one voice call, or one unit. The line between S1 and S2: if we want, for example, to allow A to call D, B to call E, and C to call F at the same time, what should the capacity of the link from S1 to S2 be, focusing on one direction? Three.
We need to be able to support three voice calls at the same time. When I say voice call here, think of it as carrying voice in one direction and back as well. So we should have a capacity of three here, and if we do, then A can call D, B can call E, and C can call F at the same time. What if the capacity is two? We want to save some money: the more capacity we give this link, the higher the bandwidth and data rate, the more it costs. And it turns out that maybe they're not always calling each other, so if we have three, it could be wasted. So what if I set it to two? Actually, let's set it to three first and highlight one other point. Let's set it to three; that was the first suggestion. Now say A makes a call to D, and no one else has made a call yet. When A dials the number of D, it sends a special message, the green line here, to switching node S1, and S1 checks: do I have enough capacity towards S2 for this voice call from A to D? Currently no one is using this link, and the capacity is three, so yes, we have enough space for one more call. The message goes out to S2. S2 checks: is D busy? No, no one is calling D, so the special setup message is sent to D. When D answers, they pick up the phone, and a message comes back informing the switches that this connection is accepted. A message gets back to A, and then A can start transferring data. So we establish a circuit, a connection, from A to D, and then we start transferring the data. The capacity of the link from S1 to S2 is three, and we're currently using one of that three. Let's say during the day no one else makes a call, and then A hangs up, so we disconnect that circuit. One thing we care about from the network operator's perspective is how efficiently we utilise capacity.
Do we use the full capacity? In this case the capacity is three and we're using one out of three. If no one else makes a call, we're only ever using one out of three. That seems like a waste. Why not lower the capacity and have a cheaper link from S1 to S2, so I don't pay so much for it? Because there was only one person making a call at a time. Maybe later they hang up and then B makes a call to E; still only one unit of that link from S1 to S2 is used. So if not many people make calls at the same time, the capacity from S1 to S2 may be wasted. We set it to three, but we don't use three much of the time. That hints maybe we should lower it and save some money. Let's lower the capacity down to two. We'll draw that again: the access links are still one, but we set this one to two. Let's see what happens. A wants to call D; it sets up a connection, there's enough capacity on the link, and it goes through to D. They're talking. B wants to call E; it sets up a connection, and there's enough capacity on the link. Now C wants to call F, and here we have a problem. C sends a special message to S1 saying, I want to call F. S1 checks: my link supports two voice calls at the same time, it's currently fully utilized, I cannot support any more, I don't have enough capacity. So S1 sends a special message back to C saying: your request has been rejected, the network is busy, you cannot connect. Something comes back saying, no, you can't make a call to F at this point in time; you have to try again later. This highlights the trade-off with circuit switching. Whenever we establish a circuit, we reserve the resources for that circuit for the duration of the circuit. That is, when A connects to D, on the link from S1 to S2 we reserve enough resources for one voice call while that connection is in place, and similarly for B to E. What that means is, if the capacity is two and it's currently all reserved, no one else can make a voice call.
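The admission decision S1 makes can be sketched in a few lines. This is a toy illustration of the idea, not a real signalling protocol; the capacity of two and the call names are the lecture's example numbers.

```python
# Sketch of circuit-switching admission control on the S1-S2 trunk.
# Capacity and station names mirror the lecture example.

TRUNK_CAPACITY = 2          # the S1-S2 link supports two simultaneous calls
reserved = []               # circuits currently holding a reservation

def request_circuit(src, dst):
    """Admit the call only if the trunk has spare reserved capacity."""
    if len(reserved) >= TRUNK_CAPACITY:
        return f"{src}->{dst}: rejected, network busy"
    reserved.append((src, dst))   # reserve one unit for the call's duration
    return f"{src}->{dst}: circuit established"

print(request_circuit("A", "D"))  # first call fits
print(request_circuit("B", "E"))  # second call fits
print(request_circuit("C", "F"))  # third call is blocked: network busy
```

Notice that a circuit is only released when a caller hangs up; until then, the reservation counts against the trunk whether or not voice is actually flowing.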
No one else can use that resource. That's a good thing, except from C's perspective: they cannot send any data. If we increase the capacity of this link to three, then C could send data, but it costs us more, and it's especially wasteful if most of the time only one person wants to call and only very rarely do all three want to call. By setting the capacity to three we allow all three to call, but we waste the resource most other times. So that's one of the trade-offs with circuit switching: we reserve the resources for each individual connection, which means some resources may go to waste. Let's extend that. Instead of voice calls, think of capacity as data rate, bits per second, and suppose we use circuit switching for our applications. So A, your computer, is connecting to a server and transferring data, and at the same time B connects to E to transfer data. The blue numbers are the capacities of the links, but we're not transferring data all of the time. Even though the link capacity is one and we reserve one, at some point in time maybe A is only sending 0.7. We reserved one megabit per second from A to S1, from S1 to S2, and from S2 to D for that circuit, and similarly from B through to E another one megabit per second, but maybe A doesn't have so much to send at this point in time and is only sending 0.7, while at the same time B is only sending 0.1; it has little data to send. Should we let C create a connection to F? The capacity of the link from S1 to S2 is two. Currently A is sending 0.7 megabits per second (or whatever units we want to use) and B is sending 0.1, so the total going across the link from S1 to S2 is 0.8. Capacity is two; currently in use is 0.8 out of two. Should we allow C to connect to F, yes or no? Hands up for yes. If C wants to create a connection of one megabit per second to transfer their data, should we allow them? Is it good for C? If you were C, would you like to transfer your data?
So we would think it would be nice if we could, at this point, allow C to transfer data to F, because this link is going underused: we've got a capacity of two and we're only using 0.8 out of two, less than 50%. Why not allow someone else to use it? Well, circuit switching does not, and that's one of its disadvantages. It's what we reserve that counts. A has reserved one out of two, B has reserved one out of two, and there's nothing spare. Even if I don't use what I reserve, no one else can take it from me. That's bad from C's perspective and bad from an efficiency perspective, since we waste the resource, but it's good from A's and B's perspective: they're guaranteed to get one megabit per second. They may currently be sending 0.1, but a little later they can increase their sending rate up to one, and they're guaranteed that rate. So that's the issue with circuit switching: we can guarantee the users a particular performance, but if they don't use all of what they reserve, we become inefficient in using the network resources. We waste resources. That's not so much a problem with voice calls, because with a telephone what we reserve and what we send are usually about the same, but it is a problem with many internet applications. Imagine I reserve one megabit per second to transfer data between my computer and a web server, the SIT web server. Sometimes I want to send at one megabit per second; sometimes I have nothing to send. That can be very wasteful for applications where the sending rate varies over time, so we'd like to allow C to send. That may be a way to summarize and finish circuit switching before looking at an alternative. (This picture is just another example of a telephone network with multiple exchanges.) Circuit switching: we set up a connection in advance and then transfer data. We reserve resources for the duration of the connection; the resources are the capacity of the links and even circuits or resources within the switches.
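To make the waste concrete, here's the reserved-versus-actual arithmetic for the S1-S2 link, using the lecture's made-up rates (these numbers are illustrative, not measurements):

```python
# Reserved vs. actual utilization of the S1-S2 link (capacity 2 Mb/s).
# Rates are the lecture's illustrative numbers.

capacity = 2.0
reserved_rates = {"A->D": 1.0, "B->E": 1.0}   # circuits book the full 1 Mb/s each
actual_rates   = {"A->D": 0.7, "B->E": 0.1}   # what the stations really send now

reserved = sum(reserved_rates.values())   # 2.0 -> link fully booked, C is blocked
actual = sum(actual_rates.values())       # 0.8 -> under half the link carries data

print(f"reserved {reserved:.1f}/{capacity}, actually used {actual:.1f}/{capacity}")
print(f"utilization: {actual / capacity:.0%}")  # 40%
```

The gap between the two sums is exactly the capacity C is forbidden from using.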
This is good: the reservation of resources guarantees quality for the users, a guaranteed data rate and delay. But it's bad because, for some applications, capacity that we don't use becomes wasted, so it depends upon the applications as to which approach is best. It's good when the applications always have something to send, like voice; it's bad when the applications vary a lot in how much they send, like web browsing and other internet-like applications. Circuit switching was designed for voice calls, and that's what it's very good for. It's not so good for internet applications. Hence people designed an alternative approach that works better for internet-type or data-type applications, and that's packet switching. In packet switching we want to overcome this problem of being inefficient when no one is using the link: we want to let other people use it if we're not using it currently. The approach is to take the data we want to send from one station to another, split it into smaller chunks, which we'll call packets, and send the packets one at a time. We'll see in some scenarios that this makes it much easier to let others use the resources when they're not being used by us. First we'll highlight the structure of packets and how they are forwarded through the network, and see how it can be more efficient. Say I have 10,000 bytes of data to send; I may break that into 10 packets of a thousand bytes each. Each packet will normally have a header, and the header identifies the destination of this packet (who it is going to) and usually a sequence number: packet number one, packet two, three, and so on up to ten, for example, among other things, but a header is usually needed in these packets. There are two approaches to packet switching: one is datagram packet switching and one is virtual circuit. Today we'll just cover datagram packet switching. Let's go straight to an example.
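The packetization step just described can be sketched as follows. The header fields (destination, sequence number) come from the lecture; the dictionary layout is purely an illustrative assumption, not a real packet format.

```python
# Splitting a message into packets, each carrying a small header.

def packetize(data: bytes, dst: str, payload_size: int = 1000):
    """Break data into payload_size chunks, each tagged with dst and a sequence number."""
    return [
        {"dst": dst, "seq": i + 1,
         "payload": data[i * payload_size:(i + 1) * payload_size]}
        for i in range((len(data) + payload_size - 1) // payload_size)
    ]

packets = packetize(b"x" * 10_000, dst="D")   # 10,000 bytes -> 10 packets
print(len(packets), packets[0]["seq"], packets[-1]["seq"])  # 10 1 10
```

Real headers carry more than this (a source address, a checksum, and so on), but destination plus sequence number is enough to follow the forwarding and reassembly examples below.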
We have our network: a source station, a destination station, and the switching nodes, the green circles. In this case we take our data and choose to split it into three packets: one, two, and three. The source station sends those three packets one at a time to the first switching node. The switching node looks at the packet, in particular the destination address in the header, and decides where to send it. In this case the switching node has two choices: send it in this direction, or send it down. (The next topic, on routing, will look at how we make that choice, what the best path is.) In this case it chooses to send it towards the node at the top, and the packets are sent one at a time. At some point in time, for example, packets one and two are going down here to the third switching node, while packet three is still traversing the second link. Where should packet three go? From this switching node's perspective there are two options: in this direction, or down, following one and two. Importantly, in datagram packet switching, when a switching node looks at a packet, it makes the decision independently of past packets. When this switch receives packet three, it doesn't care that one and two have gone down in this direction; it just decides where to send this particular packet, not based upon past packets. So what may happen is that the switching node sends it in a different direction: three ends up going up here, while one and two take the other path. Why would it do that? Maybe the switch decided that the first path seemed too slow, it wasn't performing well, so let's start sending packets across this other path; maybe it will give us better performance. So packets may take different paths through the switching network, and as they take different paths, some may arrive out of order. In this case packet three, because maybe its links performed better, gets there before packets one and two.
So that's possible. That's why we have sequence numbers: the packets need to be put back together in the right order at the destination. Even though packet three was received first, the data must be delivered in order: one, two, three. So those are the key points of datagram packet switching: we split our data into smaller packets and send them one at a time; the packets are treated independently by the switching nodes, which receive a packet, look at the destination address, and decide where to send it; and packets may arrive out of order. Let's now try to see why this can be more efficient than circuit switching. We'll come back to the earlier example, where we saw that with circuit switching we reserve resources along the entire path, meaning that even if we're not using them, no one else can. In that case the link from S1 to S2 had a capacity of two; we were only using 0.8, but still C could not use any of that link. Let's see how it works with packet switching. Same network, and I'll try to draw the packets, though they may not be very accurate. The idea is that packet switching works well when our sources have a varying amount of data to send. A sends packets to D, B sends packets to E, and C sends to F, and there's no concept of setting up a circuit at the start; we just send. So at some point in time, here is my picture of a packet being sent from A to S1, and there are some other packets. What I'm trying to illustrate is that over some period, A has sent four packets to S1: here's the first one, a little delay before the second, then the third, then another delay before the fourth packet. In the same period, B sent two packets to S1 and C sent four packets to S1. Why do they send different numbers of packets?
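The reordering step at the destination can be sketched like this. It assumes packets shaped like small dictionaries with a `seq` field, which is an illustrative layout, not a real protocol format.

```python
# Reassembling out-of-order datagrams at the destination.

def reassemble(received):
    """Sort packets by sequence number and concatenate their payloads."""
    ordered = sorted(received, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

# Packet 3 took a faster path and arrived first.
arrivals = [
    {"seq": 3, "payload": b"O!"},
    {"seq": 1, "payload": b"HE"},
    {"seq": 2, "payload": b"LL"},
]
print(reassemble(arrivals))  # b'HELLO!'
```

A real receiver also has to cope with packets that are lost entirely, not just delayed; that's a problem for the transport layer, which we'll reach later in the course.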
They have different amounts of data to send; maybe they're running different applications, visiting different websites. The applications running on A, B, and C generate data, but at different rates, and not at a constant rate: the time between packets may vary. It's not one packet straight after another; there may be a delay. For example, you're visiting a website at computer A and you click on a link; that triggers one packet to be sent. A little later you click on another link, and it triggers two packets to be sent, because the request is larger and needs to be split into two. So it depends upon the application as to the timing. The point is, let's assume the timing varies between users and between packets. What happens at S1? As the packets arrive, they are sent out. So what may arise is that S1 sends packets one after the other: this packet arrives and is sent, then shortly after a packet from B arrives and is sent, then the packet from C, then the second packet from C, then the packet from A, and so on. So even though the input rates vary, when we combine them together we have a combined total, and that can be sent out. Now let's put some numbers to this in terms of capacity. Coming back, the capacity of this link was two and the capacity of the access links was one, one megabit per second for example. But the sending rate may not reach the capacity; sometimes we're not sending. Maybe here it's 0.4 at one point in time. Don't worry about where these numbers come from; I'm just making them up for this example. We're saying that A is allowed to send at one megabit per second but at this point in time is only sending at 0.4, B is only sending at 0.2, and C at 0.4. So in this case, how much is going from S1 to S2? All the data from A, B, and C comes into S1. The total coming into S1 is one; the total we can send out is two; so we'll send out at one.
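The aggregation arithmetic just walked through can be sketched as a one-line rule: the outgoing link carries the total arrival rate, capped at its capacity. The rates below are the lecture's made-up numbers.

```python
# Statistical multiplexing at S1: output rate is the smaller of the
# total arrival rate and the S1-S2 link capacity (all rates in Mb/s).

def output_rate(arrival_rates, capacity):
    """Rate actually carried on the outgoing link; any excess has to queue."""
    return min(sum(arrival_rates), capacity)

capacity = 2.0
print(f"{output_rate([0.4, 0.2, 0.4], capacity):.1f}")  # 1.0 -> all goes straight out
print(f"{output_rate([0.6, 0.3, 0.8], capacity):.1f}")  # 1.7 -> still fits
print(f"{output_rate([0.8, 0.7, 0.8], capacity):.1f}")  # 2.0 -> 0.3 Mb/s must wait
```

The third case is the interesting one: the link is saturated, and the leftover traffic is where the queue in the next part of the lecture comes from.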
Okay, so on average we're sending out one megabit per second: we've got one megabit per second coming in, and we send out at one megabit per second, so that's fine. We get to send out everything that we receive; all of the users' data from A, B, and C gets sent from S1 to S2. That's a good scenario. Let's change these numbers. What if A, B, and C increase their sending rates? (Sorry if you're copying that down; let's just increase them a bit.) A has more data to send, B also a bit more, and C is at 0.8. Now we have a total coming in of 1.7 megabits per second, and the capacity out is two, so we're still okay: we can still send 1.7 out, and all the packets that come in get sent out essentially as they arrive. Increase again: they want to send more. What happens now? The total coming into switching node S1 is packets arriving at a rate of 2.3 megabits per second. All of the stations A, B, and C are still below their access capacity of one, but the total coming into S1 is 2.3 megabits per second, and I can only send out two. What happens with the rest? Think of a pipe or a junction with a lot of water: 2.3 liters per second coming in, and it can only send two liters out. Where does the other 0.3 liters of water go? It splashes out. What happens in our switch; what happens with the packets? Well, we can do a little better. What we can do at the switch is introduce a queue, some memory inside the switch, some memory space that we allocate. We've got on average 2.3 megabits per second of packets coming in and we can only send two megabits per second out; the remaining 0.3 we save in the queue and try to send later. We queue those packets, with the hope that sometime later, if A, B, and C reduce their sending rates, we'll be able to send those leftover packets out. So what we do in this switching node is queue up the packets whenever what is coming in exceeds
the output capacity. The queue will grow; the number of packets in the queue will increase. But if, say, C drops down to 0.1, the total coming in is 1.5, plus we have some in the queue, so we should be able to send the queued packets out. So here, what we do to allow everyone to send, assuming they send at varying intervals, is keep the capacity of our link at two but, in most cases, still support all three users, under the assumption that the users don't always want to send at one. If they always want to send at one, this won't work. What would happen? C sends at one, B sends at one, A sends at one; the input is always three megabits per second, but we can only send two megabits per second out. So many packets go into the queue, eventually the queue fills up, and when the queue is full, packets have to be dropped. That leads to lost packets between the source and destination. But if we only sometimes want to send at one, and most of the time we send at less than the capacity of this link, then this approach works quite well. Sometimes our packets may be delayed in the queue, so we get some queuing delay, but if the senders drop their rates, the queued packets get sent and everyone is happy: the data from all three users gets sent across the link, and the utilization of that link from S1 to S2 is quite high. The person who runs the network only pays for a two megabit per second link, and on average we use that two megabits per second most of the time. This can be better than circuit switching: with circuit switching, C would never get to send data, because A and B reserve the capacity and C can't send anything. This is better because it allows A, B, and C all to send, but it's worse than circuit switching in that if A, B, and C start to approach their upper limits of one, they'll see higher queuing delay, and a larger delay means worse performance for those end users. So that's trying to highlight the trade-offs between circuit switching and packet
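The queue behaviour described above can be sketched with a tiny fluid-model simulation, one step per second. The buffer size and the arrival patterns are made-up assumptions for illustration.

```python
# Toy simulation of the queue inside S1 (fluid model, one step per second).

CAPACITY = 2.0   # Mb/s the S1-S2 link can carry
BUFFER = 1.0     # Mb of queue memory inside the switch

def step(backlog, arriving):
    """Advance one second: drain up to CAPACITY, then drop any overflow."""
    backlog = max(0.0, backlog + arriving - CAPACITY)
    dropped = max(0.0, backlog - BUFFER)
    return backlog - dropped, dropped

def simulate(arrivals):
    backlog, total_dropped = 0.0, 0.0
    for mb in arrivals:
        backlog, d = step(backlog, mb)
        total_dropped += d
    return backlog, total_dropped

# Bursty load: 2.3 Mb/s briefly exceeds capacity, then C slows down.
print(simulate([2.3, 2.3, 1.5]))  # queue absorbs the burst, nothing dropped
# Sustained overload: everyone sending at 1 Mb/s, total 3.0.
print(simulate([3.0] * 5))        # queue fills to BUFFER, packets are dropped
```

The two runs are the two regimes from the lecture: a short burst is absorbed by the queue at the cost of some queuing delay, while a persistent overload fills the buffer and forces drops.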
switching. The numbers are not important; they're just illustrating the idea. We will not try to calculate what the rates will be or how much is queued. Note that we've now mentioned queuing delay, so we've got four components of delay: transmission delay and propagation delay, which you're experts at; processing delay, to do with the speed of the computer; and now a fourth today, queuing delay, the time your packets spend waiting inside the switching nodes. When is the internet, the Wi-Fi, fastest, and when is it slowest here at SIT? Is your Wi-Fi access fast enough for you? Not fast enough? When is it slowest? Have you been here at 8 p.m.? It's quite fast at 8 p.m. At lunch time it's usually slow, and at maybe 9 a.m. it's usually slow, because everyone is accessing the network at the same time. Everyone is sending packets via the switching nodes, the access points and the other switching nodes, and those don't have enough capacity to send everything out, so your packets get delayed a long time in the queue. When more people are trying to send more data, the queuing delay goes up and you'll notice the performance drop; maybe web pages are slower to display. When fewer people are using it, performance is fine. That's one of the reasons why you notice variations in the performance of a network. This approach, packet switching, in particular datagram packet switching, is what's used in the internet. We will stop there. We've briefly introduced circuit switching and then datagram packet switching. Circuit switching is mainly used for telephone networks; datagram packet switching is what's used in the internet today and in many new networks, and it has the advantage that it can allow everyone to send and can be more efficient in cases where the data sending rates vary. We'll look at one last approach next lecture and then move on to routing: how do we choose the path?
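The four delay components named above add up per link. Here's a small worked example; the packet size, link rate, distance, and queuing delay are illustrative assumptions, and the propagation speed of 2x10^8 m/s is a common rule of thumb for cable.

```python
# End-to-end delay across one link as the sum of the four components:
# transmission + propagation + processing + queuing.

def link_delay(bits, rate_bps, distance_m, prop_speed=2e8,
               processing_s=0.0, queuing_s=0.0):
    """Total one-link delay in seconds."""
    transmission = bits / rate_bps          # time to push the bits onto the link
    propagation = distance_m / prop_speed   # time for a bit to travel the link
    return transmission + propagation + processing_s + queuing_s

# 1000-byte packet, 1 Mb/s link, 100 km of cable, 2 ms stuck in a queue:
d = link_delay(bits=8000, rate_bps=1e6, distance_m=100e3, queuing_s=0.002)
print(f"{d * 1000:.1f} ms")  # 8 ms transmission + 0.5 ms propagation + 2 ms queuing
```

Unlike the first three components, the queuing term isn't fixed by the hardware: it swings with the load, which is exactly why the network feels slower at lunch time.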