We've said most of what I want to say about switching. We've introduced the three approaches: circuit switching, and within packet switching, two approaches, datagram packet switching and virtual circuit packet switching. So let's finish with a summary and a bit of a comparison of the differences between them.

In circuit switching we set up a connection, or circuit, before we transfer data, and then we transfer the data; it's commonly used in telephone networks. Similarly, in virtual circuit packet switching we set up a connection, or call, at the start and then send the data, but now we send packets. And in datagram packet switching we just send the packets; we don't set up a connection at the start.

Let's compare them and think about the advantages of circuit and packet switching. What benefits does sending data as packets give us compared to sending it as one large chunk, as in circuit switching? One of the key features of circuit switching is that when we set up a connection, we reserve resources through the network. What resources? On the links, the resource is capacity, so we reserve some of that capacity for our source-destination pair. Another is in the switching nodes themselves: the resources there are memory and CPU, which the nodes allocate so that when the data comes in, those resources are reserved for that particular source-destination pair.

So we reserve resources in circuit switching, and that gives us a strong advantage. Because many users share the network, when I set up my connection in circuit switching I'm guaranteed to have those resources allocated for my data transfer. No one can take them away: as long as the connection is set up, they are reserved and allocated for me. I think we had an example of this yesterday.
We said with voice calls: say one link between switching nodes supports a capacity of a hundred parallel voice calls. When I set up my connection, I reserve one unit of that capacity. So when 100 people make voice calls using that same link, the capacity of that link is reached: a hundred people have each reserved capacity for one voice call, the link supports 100, so the entire capacity is reserved. Even if I'm not talking on the phone, as long as the connection is set up, that capacity is still allocated to my call. No one can take it away; no one can use it, even though I'm not using it.

An analogy that people use is road networks. We need to get people from A to B on the road between two cities or two locations. What happens here in Thailand, and in other countries, when a VIP needs to travel from A to B? Some important member of the royal family or some high-ranking politician needs to get from A to B; what happens on the roads? They block the road. The police, or some organization, know in advance that these people need to travel from A to B, so they stop other people from using that particular road. The VIP can then travel from A to B with no stops, no traffic lights, no other cars to slow down for; they just travel at a constant speed all the way along the road.

Now consider a road with multiple lanes, say three lanes, and say the police do this for just a single lane. So the VIP gets one lane allocated to them.
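That voice-call reservation can be sketched in a few lines of code. This is only an illustrative model of the 100-call link from the example; the class and method names are my own invention, not from any real system:

```python
class CircuitLink:
    """A link that supports a fixed number of parallel circuits (e.g. voice calls)."""

    def __init__(self, capacity=100):
        self.capacity = capacity      # total circuits the link supports
        self.reserved = 0             # circuits currently reserved

    def setup_circuit(self):
        """Reserve one unit of capacity; fail (block the call) if the link is full."""
        if self.reserved >= self.capacity:
            return False              # the call is blocked
        self.reserved += 1
        return True

    def teardown_circuit(self):
        """Release the reserved unit when the call ends."""
        self.reserved -= 1


link = CircuitLink(capacity=100)
results = [link.setup_circuit() for _ in range(101)]
# The first 100 calls succeed; the 101st is blocked - and the capacity
# stays reserved even if none of the 100 callers is actually talking.
```

Note that nothing in the model checks whether a reserved circuit is carrying data; that is exactly the inefficiency discussed below.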
Okay, there are still two lanes left for the rest of us. We can use those two lanes to drive at any speed, turn left, and so on, but one lane is reserved for the VIP to drive in. This is the concept of circuit switching: we have some capacity available, we reserve a portion of it, say one lane of the road, for one source-destination pair, one set of users, and once it's reserved no one else can go into that lane, even if the VIP isn't using it at this point in time, even if they've driven past already. With circuit switching, the idea is that no one else can use that lane even if there are no cars in it.

So it's good for those users, because they get to drive from A to B with no stops, no other cars to slow them down, no red lights; somehow they bypass the lights as well. They get from A to B with no delays along the way; the only delay depends on the speed they can drive at and the distance, that is, the transmission and propagation delay. Even as they pass through the intersections, the lights are set green for them, so they go straight through. It's good from their perspective.

But it's not good from the perspective of the road network. In some cases that lane is reserved for an entire day, 24 hours: the police reserve the lane in advance and put cones out so no one can drive in it for 24 hours. Then the VIP drives along, which takes them one hour, and for the remaining 23 hours of the day no one can drive in that lane. From the perspective of the network, that lane is unused; that's very wasteful of the resources. And that's the trade-off that occurs in circuit switching: we reserve a portion of the capacity of the links and allocate resources inside the switching nodes for the data from A to B. That's good for the data transfer from A to B.
They get guaranteed performance. But it's bad if they reserve those resources and then don't use them, or only use them for a portion of the time, because once reserved, no one else can use them. The utilization of the network, the amount that it's used, is not at its maximum; we're inefficient.

How do we overcome that with packet switching? Imagine virtual circuit packet switching on our roads. We have our three lanes available, some intersections, some stop lights, and there are many different paths to take: to get from A to B you can turn left, go down a side road, and come back, so there are different paths. With virtual circuit packet switching, the source sets up or decides the path it's going to take in advance, and possibly even informs, let's say, the policeman standing at each intersection that we're about to drive from A to B via your three intersections. What the policeman at each intersection could then do is change the signals so that we get a green light to go through. But with packet switching we no longer have a whole lane to ourselves: our packets are our cars, and there are multiple cars getting from A to B on those three lanes of road.
We still need to compete with the other cars on that road; the cars driving in front of us slow us down, so there's still some delay. In fact, at each intersection we may not always get a green light: we may have to wait for the cars going in the other directions, so there will be delays at each intersection. With packet switching, then, the analogy to the road network is that individual cars now need to share the road. We no longer have resources reserved along the entire path, which means those cars don't get to travel at 120 kilometres an hour all the way through; they must slow down for intersections and for other cars in their way. That's packet switching: the packets go as fast as they can, but they may be slowed down by other packets coming into switching nodes and being sent across links.

But the advantage of packet switching comes from sharing. In the early morning there are few cars on the road, so if, say, a company wants all its employees to drive into the city, they can utilize that road for themselves. As more people want to drive, the road still supports their cars on those three lanes. The issue that arises is that at each intersection, as people arrive, we start to get some congestion, some delay: I arrive at an intersection, other people have arrived too, so we must give way, or wait, and there's some delay there.
So our packets, or our cars, may be delayed. But the benefit is this: think of an intersection. At some point in time there may be a lot of cars coming from one direction and few from another, all going out on some output road in a third direction, and we just let them all go. At another time there may be few cars coming from the first direction but more from the second, because it's a different time of day, a different source; again, they are all sent out in the output direction. If we used the concept of circuit switching, we'd need to reserve a lane for each set of cars. With packet switching we don't reserve any resources: we allow people to send their data at any time and we share the resources among them. That's much more efficient when the amount to be transferred, the number of people trying to drive on the road, varies over time. Sometimes there are only a few packets being sent and we send them out; sometimes there are a few packets coming from this direction and many from that direction, and we can still send them all, because we combine them. The worst case is many packets coming from both directions: we can only send them out at some speed, so the packets do get sent eventually, but with some delay, waiting for the others.

Let's try to show that with an example. You don't have this animation, but I'll make it available later; just watch and see the comparison between circuit switching and virtual circuit packet switching (later we'll talk about datagram packet switching). First, a network: squares are stations wanting to send and receive data, circles are switching nodes. Let's focus on the top: this is the source, this is the destination, and we'll focus on this part of the network, via the two central switching nodes.
So for our example, we want to send data from here to here, using circuit switching first. Before we can send the data we establish a circuit, or connection. We send a special message from the source to the first switching node saying "I want to connect to this destination". The switching node receives that message and records some information in the switch, saying there's potentially going to be a connection, a circuit, from here to here. We keep sending that message on to the next switch. It goes by different names: a circuit request or connection request message; we're requesting to set up a circuit. We send it through to the destination, and when the destination receives it, it responds with an accept or response message, saying "I accept your connection". This is like someone picking up the phone: you call them, and when they pick up, a response effectively comes back.

As the response comes back, it confirms the connection, or circuit, to be set up. The switches are now confirmed, and inside each switch they connect this input line with this output line. Remember, our network has multiple lines coming in; I've just hidden the other links. Because of this circuit setup, from the switch's perspective, anything that comes in on this link is sent out on this link, and anything that comes into the second switch on this link is sent out here, if it belongs to this pair of stations. That's the connection setup, or circuit setup. And we saw the picture of the operators: the way they did it in the old style was that the lines were your telephone lines, and they essentially just connected them together with a separate cable in the telephone exchange.
Now it's just done in electronics. So we've set up a circuit. With circuit switching we then just send the data, and the best way to think of it is that we now have a link from the source all the way through to the destination: we have a link to the first switching node, which has some logic inside connecting it to the output line, which links to the second switching node, and so on into the destination. The "link" through each switching node is really just some electronics inside it. So if we transmit a signal from the source, that signal travels along this line, through the first switching node, along this line, through the second switching node, and arrives here, as if we had one long cable from source to destination.

Here we represent the data being transmitted. We start transmitting some data; it comes out of the source and is transmitted along the link to the first switching node. When the first switching node receives it, it simply passes it through to the output link: the data goes through the switch with very little processing in circuit switching. It's just some electronics saying anything coming in here goes out here. It passes through to the destination, and the destination receives the data. Resources were reserved by the switching nodes for this data transfer. And once we've set up this connection and transferred our data, the connection is still set up: no one else can use the resources. They're reserved just for this pair, source and destination, and no one else can make use of that spare capacity.

Let's look at virtual circuit packet switching, same setup. With virtual circuit packet switching we try to be like circuit switching: we set up a connection first, with a similar procedure.
We send a special packet, and the destination responds, saying "yes, I accept your connection". The result is that we've informed these switching nodes that there's going to be some data transfer from this source to this destination. I've drawn it a little differently here, because of the way it's implemented: now we're going to send packets, and what each switching node will do when it receives a packet is look in the header. Inside the packet header will be the source and destination, and what the switching node did when we set up the connection was make a note: any packet coming from this source, going to this destination, I'm going to send out on this line. There's no reservation of resources; the setup just tells the first switch "if you receive from this source, send it out here", and the second switch "if you receive from this source to this destination, send it out here".

So we now send our data, but now as separate packets: if we have a large file to send, we break it into multiple packets and send those packets one at a time. Not shown here, but the packets will have some header information, not just the data; with circuit switching there was just data, no header needed.

Let's consider a slight variation. The data to be sent from source to destination doesn't flow continuously: sometimes we have some data to send, sometimes we don't, so we don't continuously send packets. Say I send a packet, then a few milliseconds later a second packet, then nothing, and then a bit later some more. We don't have to send packets continuously, and that's what I've tried to illustrate here: there's a bit of a gap between the sending of subsequent packets. I transmit packet one, then maybe two milliseconds later,
I transmit packet two, and a few milliseconds later packet three. It depends on the application: do we have a lot of data to send, or some data and then nothing? We'll see why that's relevant later.

The packets arrive at the switch, which looks at the header of each packet and determines, from the connection setup, that it needs to be sent out here. Importantly, compared to circuit switching there is some significant processing here: a processing delay for each packet. The switch needs to read the header (normally the entire packet), determine where to send it, and then send it. So there's some delay for the packet in the switch, which depends on the packet size and on how fast the switch can process packets. But importantly, and not shown very well here, remember there are other links coming into this switch, so at the same time there may be other packets arriving. The time it takes to process this packet and send it out depends on how many other packets are coming in at the same time from other sources and destinations. So there may be some significant delay for this packet; we generally refer to it as queuing delay, because the packet may have to wait while other packets are being processed, and that queuing delay can be significant. Eventually we send it out; the other packets come in, we send them out one at a time, and we deliver those packets. But because the delay varies, you see there's a bit of a gap between when the first packet was sent and when the second was sent.
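The per-packet header lookup that this switch performs can be sketched in code. Note that a real virtual-circuit switch typically matches on a short virtual-circuit identifier carried in the header; here, following the lecture's description, the sketch keys the table on the (source, destination) pair, and all names are invented for illustration:

```python
class VirtualCircuitSwitch:
    """Forwards packets by looking up the header in a table that was
    installed at connection setup. The setup reserves no capacity."""

    def __init__(self):
        self.table = {}                       # (src, dst) -> output line

    def setup(self, src, dst, out_line):
        """Connection setup: just install a forwarding entry."""
        self.table[(src, dst)] = out_line

    def forward(self, packet):
        """Per-packet work: read the header, look up the output line.
        This processing is the delay that circuit switching avoids."""
        return self.table[(packet["src"], packet["dst"])]


sw = VirtualCircuitSwitch()
sw.setup("source", "destination", out_line=2)
pkt = {"src": "source", "dst": "destination", "data": "packet 1 of the file"}
line = sw.forward(pkt)        # -> 2: send this packet out on line 2
```

The header fields on every packet are the overhead that circuit switching does not pay; the lookup (plus any waiting in the queue) is the per-packet processing delay.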
There's some space here. From the perspective of this link, we're not 100% efficient: we send a packet, then nothing for some time, then the second packet, then nothing, then the third. So for some of the time we're sending data across this link, and the rest of the time we're sending nothing; we're not fully using the link. We're inefficient in that case, and we deliver the packets on.

Now let's come to a case where we have a second source sending to another destination, this one at the top, sending packets across this red line to the destination here. We've already set up two circuits; I won't show that process again. This source has some data to send, the red packets, and we have our original blue packets to send as well. What happens? Two blue packets are sent, then this one starts sending, so packets start arriving at this switching node. As they arrive, the switching node processes them, and in this case they all need to go across this link, because both paths traverse it.

In fact, this is where we get an advantage from packet switching. When there were just the blue packets to send across this link, we were inefficient, because we weren't sending all the time. But now we have a total of five packets coming into the switch, all needing to cross this link, and we can send them one at a time: the first blue packet, a small gap (maybe before the red one arrived), then one of the red ones, then a blue one, a red one, and a blue one. All five packets that came in are sent across that link. You see there are fewer gaps between the packets; in other words, there are fewer times when we're not transmitting on this link.
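That interleaving, blue, red, blue, red, blue, is statistical multiplexing, and can be sketched as a round-robin scheduler over per-flow queues. This is a toy model, not any real scheduler: six time slots, one packet sent per slot, and the flow contents are the blue and red packets from the animation:

```python
from collections import deque

def multiplex(flows, slots):
    """Round-robin statistical multiplexing: in each time slot, one packet
    is taken from the per-flow queues and sent on the shared output link.
    A slot is None when no flow has a packet waiting."""
    queues = [deque(flow) for flow in flows]
    schedule = []
    turn = 0
    for _ in range(slots):
        sent = None
        # look for the next queue (round-robin) with a packet waiting
        for i in range(len(queues)):
            q = queues[(turn + i) % len(queues)]
            if q:
                sent = q.popleft()
                turn = (turn + i + 1) % len(queues)
                break
        schedule.append(sent)
    return schedule

def utilization(schedule):
    """Fraction of time slots in which the link actually carried a packet."""
    return sum(p is not None for p in schedule) / len(schedule)

# Blue flow alone: the link sits idle half the time (utilization 3/6).
alone = multiplex([["b1", "b2", "b3"]], slots=6)
# Blue and red flows sharing the link: fewer idle slots (utilization 5/6).
shared = multiplex([["b1", "b2", "b3"], ["r1", "r2"]], slots=6)
```

Running it, `shared` comes out as blue, red, blue, red, blue, idle, the same interleaving as the animation, and the shared link's utilization is higher than either flow could achieve alone.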
We're using the link more efficiently; we're sending all the time. If we have a link, we want to use it to its full capacity; if we don't, we're inefficient. So here is where an advantage of packet switching comes in: what is often the case is that one node has many packets to send while others have few, and we can take advantage of that by combining packets from different sources and sending them one at a time across the same link. We got that advantage here.

What if there was another virtual circuit going along this path? Maybe ten orange packets coming in here. What do you think would happen at this switch? We had three blue ones coming in here, two red ones here, and maybe ten orange ones here, all at a similar time. When we send the packets out, are we going to fully utilize the output link? We've got a lot coming in. Say we have the capacity to send six packets out per second, to give some number: with many packets coming in, we keep sending packets out, so utilization, or efficiency, will still be high; we're always transmitting on this link. Efficiency is good from that perspective.

But the bad side is that if many packets are coming in and only a few can go out, the ones coming in may have to wait before they are sent out. There will be some queuing delay. That's what you see on roads at intersections: here's an intersection with traffic lights, a lot of cars coming from this direction, a lot from this direction, a lot from this direction, and most of them want to go down this one road. From the perspective of that portion of road, there will be many cars on it.
It will be fully utilized; the road will be full, because it will be taking as many cars as possible. That's a good thing in terms of utilizing the road, or the link. But the problem is that the cars will start to line up here, because cars from all three directions are trying to go in this one direction and we cannot fit them all on. They'll have to line up, and there'll be a queue of cars in each direction as they all try to go there, so you'll get a long delay waiting to get onto the output road, or waiting for the packets to be sent. So the efficiency can be good, but if too much comes in, the delay goes up. That's the potential downside of packet switching. With circuit switching that's not an issue, because the resources are reserved in advance: we know exactly how much is coming in and how much we can send out.

Any questions? And lastly, we send those packets on: this switch determines that the red ones go in this direction and the blue ones here, and they're received, and we're done. So this example tries to illustrate that with packet switching we can increase the efficiency of using the links. That's the advantage: we don't waste unused capacity, because we can send other people's packets across a link when one user is not using it. With circuit switching it can be very inefficient if we don't use the capacity we reserved. But with packet switching, the delay may go up if many people are sending; with circuit switching, performance is guaranteed: you get what you reserve.

Yes? Why did I choose blue, red, blue, red, blue? Well, I just chose randomly and drew them like that. What would normally happen is that the first packet to come in is the first one to go out, as packets arrive here.
It's a first-in, first-out queue: the first one in is the first one out. If two come in at the same time, the switch could just randomly choose one of them. But another way would be to give some packets priority over others. Let's say the red user paid a lot of money for their network connection and the blue user is a normal user on the low-cost service; that may mean the switches treat the red packets with higher priority, so we'd see the red ones come out first and then the blue ones. Or the red ones are associated with a voice call, a voice-over-IP connection, and the blue ones with web browsing: with voice and video transfer, live interactive calls, we normally want to give high priority, because they need to get to the destination faster. So there are things we can do to give priority, but we're not going to explain that any further. Any other questions before we go back to the lecture?

Let's finish off with a comparison. This slide compares the three switching techniques from the perspective of how long it takes to get the data from source to destination: circuit switching, virtual circuit packet switching, and datagram packet switching, across three links in this example. With the first two, we see we need to set up a connection: we send some special request message, and some accept or response message comes back. Likewise with virtual circuit packet switching, we send a request packet and get an accept or response packet back, saying "yes, let's start the data transfer". That incurs some delay: if this is time zero, it takes some time to get that packet there and get the accept back. That's not present in datagram packet switching; we don't set up a connection.
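Returning for a moment to the FIFO-versus-priority question above: priority scheduling is commonly implemented with a priority queue. A minimal sketch, where the two priority classes (voice versus web, or premium versus low-cost) and all names are illustrative:

```python
import heapq

class PrioritySwitch:
    """Output queue that sends higher-priority packets first.
    Priority 0 might be a paying customer or an interactive voice call;
    priority 1 is best-effort traffic such as web browsing. Ties are
    broken by arrival order, so within one class it is still FIFO."""

    def __init__(self):
        self.heap = []
        self.arrival = 0              # tie-breaker: preserves FIFO per class

    def enqueue(self, packet, priority):
        heapq.heappush(self.heap, (priority, self.arrival, packet))
        self.arrival += 1

    def send_next(self):
        return heapq.heappop(self.heap)[2]


sw = PrioritySwitch()
for pkt, prio in [("blue1", 1), ("red1", 0), ("blue2", 1), ("red2", 0)]:
    sw.enqueue(pkt, prio)
order = [sw.send_next() for _ in range(4)]
# The red (voice) packets drain before the blue (web) ones:
# ["red1", "red2", "blue1", "blue2"]
```

With every priority equal this degenerates to the plain FIFO queue described above, since the arrival counter alone decides the order.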
We just send our data as packets, so there's no setup delay at the start.

Once we've set up a connection, in the first two cases we can send our data; this is the data transfer phase. First, the two packet switching techniques. Under the same conditions, both send the packets in the same manner: we create a packet, attach a header to the data, and transmit that packet across the first link. Once the second node receives that packet, it examines it, perhaps with some processing delay, shown here, and then transmits it across the second link. When that first packet is received at the next node, it determines where to send it, transmits it across the third link, and then the first packet is received by the destination. And in this example we were able to transmit packet two across the first link while packet one was being sent across the second link. So we can calculate the time it takes to deliver that data; it's exactly the same for both packet switching techniques. Importantly, the switching nodes need to receive the entire packet before they send it on, and there's often some small delay at each node, shown here: receive a packet, process, then send. So the time to complete this depends on the data size, of course, but also the header size, how many packets there are, and what processing delay is needed at each intermediate node.

But with circuit switching, once we've set up a circuit, we take the data we want to send, transmit it out of the source, and that data goes across the first link, through the first switching node, across the second link, through the second switching node, and is received. The switching nodes don't need to process that data. Remember from our earlier picture, there's effectively a line going straight through those switching nodes, so the data just travels straight through.
There's almost zero processing delay; there is some, but effectively it's almost zero. The time to deliver the data from source to destination depends on the transmission delay, which is this time here, and the propagation delay across the path, from here to here. The propagation delay of the path is just the sum of the propagation delays of the links in it: if it takes one millisecond to propagate across each link, it takes three milliseconds to propagate across the path. There's no processing in the middle.

So with circuit switching, the data transfer, for the same amount of data under the same conditions, will always take less time than with packet switching, because with packet switching we have extra headers to send, which is an overhead, and we have to wait for each whole packet before sending it on. With circuit switching the data transfer is always faster, but both circuit switching and virtual circuit packet switching have the setup, which incurs an extra delay.

So which one is best, or worst, in terms of total delay? Can you say anything about the three? Just to make it easier: this part is the disconnect, saying we've finished the data transfer; let's forget about that and focus on the time from the start, from when we want to transmit data until the data is delivered. Well, what can we say? Datagram packet switching, under the same conditions, will always be faster than virtual circuit packet switching, because the data transfer is the same but virtual circuit packet switching also has the call setup, which takes extra time. We can also say circuit switching will be faster than virtual circuit packet switching, because we have effectively the same call setup
(there may be some details that differ), but the data transfer is faster with circuit switching than with packet switching.

What about circuit switching on the left versus datagram packet switching on the right? Which one is faster, or under what conditions would one be faster than the other? The trade-offs are these: with circuit switching we must set up and then transfer the data, so the disadvantage is the setup time. With datagram packet switching there's no setup time, but we have the disadvantage of sending a header with every packet, and each node must receive a whole packet before sending it on, that is, the processing time in the intermediate devices. We cannot say one is always better than the other; it depends on the conditions. Generally, if we increase the amount of data, circuit switching becomes better than datagram packet switching, or at least faster, because with a large amount of data the call setup is just a small percentage of the total time; the data transfer is the large proportion. And with a large amount of data we need many packets, every packet has a header, and we need some processing.
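That crossover can be put into rough numbers. The two functions below are the standard transmission/propagation/store-and-forward delay model from this discussion; the specific rates, sizes, and processing delays used in the comparison are made up for illustration:

```python
def circuit_delay(data_bits, rate_bps, links, prop_s):
    """Circuit switching: a round-trip setup, then one transmission of the
    raw data, which flows straight through the switches (no per-node
    processing and no headers)."""
    setup = 2 * links * prop_s                    # request out + accept back
    return setup + data_bits / rate_bps + links * prop_s

def datagram_delay(data_bits, rate_bps, links, prop_s,
                   packet_bits=8000, header_bits=160, proc_s=0.0):
    """Datagram packet switching: no setup, but every packet carries a
    header, and each switch must receive a whole packet before forwarding
    it (store-and-forward, pipelined across the links)."""
    n_packets = -(-data_bits // packet_bits)      # ceiling division
    tx = (packet_bits + header_bits) / rate_bps   # time to send one packet
    # the first packet crosses all links; the rest pipeline behind it
    return (links + n_packets - 1) * tx + links * prop_s + (links - 1) * proc_s


# Made-up scenario: 1 Mb/s links, 3 hops, 10 ms propagation per link,
# 1 ms processing per intermediate node, 8000-bit packets, 160-bit headers.
small_c = circuit_delay(8_000, 1e6, 3, 0.01)
small_d = datagram_delay(8_000, 1e6, 3, 0.01, proc_s=0.001)
big_c = circuit_delay(8_000_000, 1e6, 3, 0.01)
big_d = datagram_delay(8_000_000, 1e6, 3, 0.01, proc_s=0.001)
```

With these numbers, the one-packet transfer is faster as a datagram (no setup round trip), while the 1000-packet transfer is faster over a circuit (the setup is amortized and there are no headers or per-node processing), matching the conclusion in the lecture.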
So there's some extra delay in the data transfer. With a large amount of data, doing a call setup is not so bad; but with a small amount of data, doing a call setup and then sending the data is usually bad for performance, that is, we get a large delay. It's better just to send the data immediately, even if we have to send some headers and do some processing of packets; we can finish our data transfer faster in that case. So there are trade-offs between the times it takes to deliver the data in those three cases.

To finish, we've got one more slide, but first a reminder. Circuit switching is mainly used in telephone networks and has been around for a hundred years: home telephone networks, and some (not all) mobile networks. The Internet makes use of datagram packet switching, the Internet Protocol specifically, so in everything you do on the Internet you're using a form of datagram packet switching. It's very well suited: it's simple, and there's no delay from setting up a connection, so when you visit a website you just send the data and the web server sends back the data; there's no "set up the connection and then send the data". Where is virtual circuit packet switching used? In some wide area network technologies, say inside an ISP's network, or inside a campus network; it's used in some cases.

This table summarizes some trade-offs. Some of them we haven't discussed; you can read through them. Let's just pick out the main ones. The second row: with circuit switching we just send the data, while with packet switching we transmit packets and there's the overhead of headers. With circuit switching and virtual circuit packet switching there's some call setup delay. With circuit switching, once we've set up the call, inside the switching nodes
there's very small delay: data just goes straight through. But with both of the packet switching techniques there's some delay in transmitting each packet; that's the negative.

The second-to-last row says "fixed bandwidth"; read that as a fixed assignment of resources, reserved resources. With circuit switching we reserve some resources (bandwidth refers to the capacity of the links), and that reservation is fixed, whereas with packet switching we use those resources dynamically. What are the advantages and disadvantages of a fixed set of resources? We guarantee the performance, but it can be inefficient if we don't use the resources. Dynamic use of resources is more efficient, because whoever needs them uses them, but it may lead to some extra delay; that's packet switching.

And that comes back to the row in the middle: if a lot of people are sending data, we start to overload the network. What happens in circuit switching? We block the next call. We said yesterday that if our network can support 100 voice calls at any one time, the 101st person who tries to make a call will not be able to set one up; that call is blocked. That's the outcome. Whereas with the packet switching techniques, what normally happens is that we still get to send our data, we're not blocked from sending, but the delay of the packets being delivered to the destination goes up; that's the negative. The data eventually gets through, it just takes more time. I think those are the main things to pick out from that comparison of the techniques from the last 30 minutes.

We'll move on to the next topic. Before we do, any questions on switching? We've omitted many details, that is, we haven't gone into much detail of how they work, but we've tried to introduce the concepts. With switching we saw what we do in a communications network: we want to send data from source to destination.
we need to go via some path, via multiple links. But we saw at the start that usually we have multiple paths to choose from. We said: well, we choose a path and then send data using one of the switching techniques. Now we need to go back and ask: how do we choose a path? How do we choose a good path between source and destination? That topic is routing.

Again, we can give an analogy with road networks. When you're driving your car, say you want to drive after the lecture to go shopping at Paragon or somewhere, which way do you drive when you leave campus? Which roads, which path do you take? Anyone have an idea? Okay, on the tollway, via Rangsit. Are there other paths, other ways to drive? There's another tollway, the Chaengwattana one, so effectively from the north there are two tollways. And of course you don't have to go on a tollway at all, so there are many paths to the same destination.

How would you choose a path? What would you consider? Traffic jams: you want to avoid traffic jams, avoid congestion. What else? Distance: don't drive 100 kilometers north, come back down, and make a 300 kilometer trip; consider the distance and try to find the shortest path in terms of distance. Anything else? Speed: some roads allow you to drive faster than others, like the tollway or the expressway, so if you use those, even though the distance may be longer, the time it takes you to get there may be shorter. What else? What if you don't want to spend money on a tollway? Then you avoid the tollways because you want to save some money for something else. Once off, not a problem, but every day it adds up.
So there are financial costs; you may make a decision based upon the financial cost of using a road. There are different criteria for choosing the best path to drive from A to B. It's the same in communication networks: we need to choose a path to get our data from source to destination via different switching nodes, there are multiple paths, and we choose the best path, but there are different criteria for defining what is best: distance, financial cost, delay. We'll look at some of them.

Still on the road network, here's a grab from Google Maps for driving to Paragon from our campus. Google determined a path for me; in fact, we'll see it determined several paths. Here's one path, don't worry about the details, to get from A to B. When I select that path, it shows the distance: 31.4 kilometers. It has calculated how long each of those segments of road is, which is not so hard, and added them up to get the distance. It also gives a typical time of 30 minutes. Most likely (I'm just guessing, really) that's calculated from the distance of each segment, the speeds you can travel on each segment, and maybe some typical past data from the last six months or so. So it works out that typically it takes 30 minutes to drive this path. But it gives me another piece of information. I did this about 8 a.m. this morning, and it said that in the current traffic
it'll take 46 minutes. How did it determine that? What's the difference between the typical time of 30 minutes and the current-traffic time of 46 minutes? Again, I don't know exactly how Google does it, but they have collected some information about the current road conditions, the congestion, the traffic jams. Maybe there's a traffic jam in a particular location, and somehow they've collected information about what's happened in the last 10 minutes, the last 30 minutes, the last hour, and from that their algorithm determines that if you drive now it's going to take you 46 minutes, not 30.

In fact, it gave me three options. The second option is another path of 32.6 kilometers, 36 minutes typical, one hour five minutes in current traffic. The third is 39.5 kilometers, 39 minutes typical, 56 minutes in current traffic. So we see different criteria. If I chose based upon distance, in this case I would choose the first one; based on current time, I'd also choose the first one. But comparing the second option and the third: the third has a longer distance than the second but a shorter time in the current traffic. So the choice differs depending upon the criteria we select; we need to define "best" based upon some selected criteria.

The other thing that's important here is that Google has calculated the time in current traffic, so somehow it gets updates. Where would it get updates from? How does it know what the current traffic is? This is Google Maps, so everyone with an Android device has some information going back to the Google servers, and from that they get information about different things: where the concentration of people in cars is, maybe the motion of cars, maybe applications reporting back where people are connected. So they work out that there are a lot of people here and they're slow moving; maybe that means congestion.
Maybe there's a traffic jam there, which means they can update the current traffic. Another potential source, maybe not with Google but with other systems, is the traffic center for the city. There are cameras on many roads, and they get feedback on traffic jams; we could take that information and use it to update our calculation of the travel time.

Importantly, there are two factors there. You need some source of information so that you can make an accurate decision. If I used the typical times of 30, 36 and 39 minutes and chose the least time, I'd choose the first path. But maybe there's a traffic jam on the first path, the road is closed, and there's an extra one hour delay, so that path is now up to one hour 30 or more. Then of course I shouldn't choose the first path; I should choose one of the others, which would take me less time. So to make the correct decision, I, or the network, need to get some information about what the current conditions are, where the traffic jams are, and get it on a regular basis. Where we get the information from, and how often we get it, is important in determining the chance that we'll choose the best path, the optimal path. The more frequently we get updates, and the more sources they come from, the more chance we'll choose the path which is optimal. That's road networks; the same applies with communication networks.

There was one more option I wanted to show. A fourth path, a different one, avoids tolls. This one is based upon financial cost. If I wanted to be more precise, I could say: give me a route that costs less than 30 baht, or less than 60 baht, that is, omit roads whose total toll is larger than some amount.
So we could be more precise from that perspective. Now let's return to our communication networks. This process of choosing the best path is called routing: choosing the best path, or the best route. In our example communication network, what's the best path from A to D? Anyone? 4-5-3: that's one path. If we said the criterion for best was the number of links we traverse, then 4-5-3 would be the best in that case: counting the switching nodes we traverse, here we have three, here we have four, this one has four, so 4-5-3 is the shortest in terms of number of links. But what if I said we want to consider the delay to send our data, and I knew that the delay to send from 4 to 5 was very high but the delay on this upper path was very low? Then the upper path may be the best path. The best path depends upon the criteria we specify for what is best; there are many different factors.

So, the question: what path or route should be taken from source to destination? The answer: choose the best path. Well, what do we mean by best, and how do we choose it? We usually need some algorithm to calculate the best path, given a definition of best, especially when we have large networks: hundreds of nodes, not just a simple example of six nodes, with thousands of possible paths. I'm sure you know an algorithm for choosing a best path. What's an algorithm you've studied in a previous class? Maybe Dr. Booneer taught it, or maybe someone else, I can't remember. Yes, you remember: shortest path routing. You would have heard of Dijkstra's algorithm for finding the shortest path. Maybe you covered others as well, like Bellman-Ford. So there are algorithms that, given a network, will calculate and always find what we'll call the least cost path, the best path. We're not going to study those algorithms; I'll assume you're experts in them.
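As a refresher, here is a minimal sketch of Dijkstra's algorithm in Python. The graph, node names, and link costs are made up for illustration; the lecture's own example network is on the slides, not reproduced here.

```python
import heapq


def dijkstra(graph, source):
    """Return (dist, prev): least path cost from source to each reachable
    node, and a predecessor map for reconstructing the paths.

    graph: dict mapping node -> dict of neighbor -> link cost.
    """
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev


def path_to(prev, source, target):
    """Walk the predecessor map backwards to recover the least cost path."""
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path))


# A small illustrative (hypothetical) network: link costs on directed links.
graph = {
    "a": {"b": 2, "c": 5},
    "b": {"c": 1, "d": 7},
    "c": {"d": 3},
    "d": {},
}
dist, prev = dijkstra(graph, "a")
print(dist["d"])                  # 6: via a-b-c-d (2 + 1 + 3)
print(path_to(prev, "a", "d"))    # ['a', 'b', 'c', 'd']
```

The priority queue makes each node settle at its least cost before its neighbors are relaxed, which is the essential idea of the algorithm.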
We won't need them for this course. Routing is needed in all of the switching networks, circuit and packet switching; we always need to choose the path. We're just going to use packet switching in the examples from now on.

So we need some algorithm to find a route or a path from A to B. What are some requirements on that algorithm? It needs to be correct. The algorithm takes as input a source and a destination, and as output it returns a path. A correct algorithm returns a path which takes us from A to B; an incorrect algorithm returns a path which takes us from A to C. That's almost obvious: of course we need an algorithm that produces a path which gets us to the destination we asked for. If Google Maps, when we said we wanted to go to Paragon, sent us to Chiang Mai, that wouldn't be very useful. So it needs to be correct.

It needs to be simple, because simplicity usually leads to something that's easy and cheap to implement, and we need to implement this on computing devices.

We'd like an algorithm that is robust, which means that when things are not working in the network, it can still determine the best path. An example: let's go back to our example and find a path from A to D, which is 4-5-3, and then we discover over time that switching node 5 is having some problems.
It's dropping many packets that come in. A robust algorithm would allow us to adapt and maybe choose a better path, so we'd like that as a requirement. But what we don't want is this: we choose this path and start sending some packets, then our algorithm says that this other path is slightly better, so we switch to path 1-2-3 and send packets along there, but then our algorithm says no, now maybe 4-7-6-5-3 is better. Small changes in the network cause rapid changes in the path we choose. This is instability in the paths, and it can be bad: sending your data across different paths all the time may impact some applications. So we want robustness, but we also want stability; we don't want too frequent changes in the paths.

We want to choose the best, optimal paths. Fairness is the issue that some routing algorithms may choose a good path for one pair of nodes but a very bad path for another pair. That would be unfair to the second pair of nodes. A better algorithm would choose paths which are maybe okay for both pairs. So there's this issue: we don't want an algorithm that always gives short paths or low delay paths to some nodes and very high delay paths to other nodes; we want one that treats all nodes equally.

Efficiency: we'll see that when we implement routing algorithms, we normally need to collect information from other nodes, and collect it on a regular basis. That takes some transmission overhead, because to collect the information we need to send data through the network, and it may take some processing. Whatever routing approach we use, we want to minimize those overheads; we want it to be efficient.

Let's look at an example to define some terminology. In this example there are six switching nodes, n1 through n6. There may be stations attached, but for simplicity
we just look at the switching nodes. The arrows represent links; a link is a direct connection between a pair of nodes. There's a link from n1 to n4, and a link from n4 to n5. A path is one or more links to get from a source to a destination. So a path from n1 to n6 is n1-n4-n5-n6; a path from n1 to n4 is n1-n4. A path may contain one or more links, and there may be, and usually is, more than one path between a pair of nodes; from n1 to n4 there are many paths. So that's some terminology: a path is a set of links to get from source to destination.

When we traverse a link, we say that's a hop. To go from n1 to n6 via the path n1-n4-n5-n6, we go from n1 to n4, that's one hop; n4 to n5 is a second hop; and on to n6 makes three hops. So it's three hops from n1 to n6 on that path. A hop is a common way to measure how many links we traverse to get from source to destination along some path.

Our links have costs associated with them, and on this diagram the costs are shown as the numbers next to the arrows. The cost to send some data from n1 to n4 is one unit; from n1 to n2, two units; and so on. These costs are not necessarily financial costs. The cost is some generic cost; it depends upon what we define as the performance metric. If we care about delay, say I want to choose the path from n1 to n6 with the lowest delay, then we could make the costs equal to the delay to send across each link: for example, one meaning one millisecond, seven meaning seven milliseconds, and so on. If we care about financial cost, then we can say the cost to transmit data across a link is one baht per X bytes, seven baht per X bytes, and so on. Or if we care about throughput, then those cost values can reflect the throughput on the links. So when we say cost, we don't necessarily mean financial cost; it could be any factor. What did we miss? Topology, and neighbors.
A neighbor is a node that we have a direct connection to. The neighbors of n1 are n2, n3 and n4. n5 has three neighbors; n3 has five neighbors. So the neighbors are those nodes we can directly connect to, also called adjacent nodes. The topology of a network is the arrangement of the nodes and links. This is one topology; if we remove the links here, we get a different topology; if we add a node here and some links, that's a third topology. So the arrangement of the nodes and links is the topology.

The goal for routing is to choose a path from A to B which gives us the least cost, the path with the lowest cost. What's the least cost path from n1 to n4 in this example? An easy one: n1 to n4 directly, with a cost of one. The cost of a path is the sum of the costs of its links. So what's the least cost path from n1 to n5? If you check, you'll see it's n1-n4-n5, and the cost of that path is 2: 1 plus 1, just add them up. From n1 to n6, the least cost path is n1-n4-n5-n6 with a cost of 1 plus 1 plus 2, which is 4. You can check the others. For n6 to n1 you need to actually check most of the paths; it's not straightforward.

Note some new things here: the cost of a link may be different in each direction. One unit to send from n1 to n4, but seven units to send back from n4 to n1. That's possible, because the characteristics of the link may be different in each direction, or the financial cost may be different. So in general the cost depends upon the direction. From n1 to n6 the least cost path is n1-n4-n5-n6, but from n6 back to n1 it is not necessarily the same path in reverse, and it isn't in this case. If you look: n6 to n5 to n4 is a cost of 4 plus 1, then to n2 is plus 2, then to n1 is plus 3, so 4 plus 1 plus 2 plus 3, a cost of 10 on that path. Is there a lower cost path? n6 to n3 to n1 is 8 plus 8, which is 16. What about n6 to n5 to n3 to n2? You'd need to check all those paths.
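The "sum the directed link costs along the path" rule can be sketched in a few lines of Python. The cost table below is a partial reconstruction of the lecture's example network: only the directed links whose costs were stated aloud are included, so this is illustrative rather than the complete diagram.

```python
# Directed link costs, partially reconstructed from the spoken example.
# Note the asymmetry: n1->n4 costs 1, but n4->n1 costs 7.
COST = {
    ("n1", "n4"): 1, ("n4", "n5"): 1, ("n5", "n6"): 2,   # forward path, total 4
    ("n4", "n1"): 7,
    ("n6", "n5"): 4, ("n5", "n4"): 1, ("n4", "n2"): 2, ("n2", "n1"): 3,  # total 10
    ("n6", "n3"): 8, ("n3", "n1"): 8,                    # alternative, total 16
}


def path_cost(path):
    """Sum the directed link costs along a path; KeyError if a link is missing."""
    return sum(COST[(a, b)] for a, b in zip(path, path[1:]))


print(path_cost(["n1", "n4", "n5", "n6"]))        # 4
print(path_cost(["n6", "n5", "n4", "n2", "n1"]))  # 10
print(path_cost(["n6", "n3", "n1"]))              # 16
```

Because the table is keyed on ordered pairs, the reverse of a least cost path is not automatically least cost, which is exactly the point of the n6 to n1 example.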
And that's what Dijkstra's algorithm does for us. When we have a large network, we don't need to do it manually; we use an algorithm that takes this information as input and returns the least cost path from a source to a destination. Dijkstra's algorithm actually returns the least cost paths from n1 to every other node, and of course we can then use those.

So what we do in a communications network: once we know the topology, we choose some performance metric like delay or financial cost, we use that to assign a generic cost to each link, we get a picture like this, and then we apply an algorithm like Dijkstra's to find the least cost paths between nodes. Then we use those paths.

In the last five minutes, let's introduce some other issues. To do that we need to answer: what are the performance criteria? And a few other questions. First, performance criteria. Here are four examples. A common one in a network is the number of hops: choose the path with the fewest hops. That's a common, easy performance metric. Financial cost: choose the path which is cheapest for us to send our data, especially if we're sending data via different internet service providers. Not from an end user's perspective, but from a company's perspective, we may have different paths to reach a destination and must pay to use other networks; someone pays, so there's some financial cost of using particular paths. Delay: the time it takes to get a message across a particular link; we can use that to find the path with the least delay. Throughput: if I want to transfer a large amount of data from A to B, give me the path with the maximum throughput. So those are four common performance criteria; there are others. Which one we choose depends on our requirements. We'll skip these two; we'll come back to them next week.
Let's go to the network information source. This is the current view of the network, but things change over time, and therefore the costs may change over time. Currently, with this view of the network, n1 calculates the least cost path to n6 as n1-n4-n5-n6 with a cost of four units. So n1 goes via n4 and n5 to get to n6. But then, five minutes later, while the network is running, the costs change on some of the links, and the cost of the link from n5 to n6 goes from 2 up to 20. Whatever the reason, maybe the delay along that link has changed because there's a lot of traffic across it. What that means is that if n1 keeps using its path n1-n4-n5-n6, it's using a suboptimal path, because the cost of that path has gone from 4 up to 22 now that this link has changed to 20. There are lower cost paths than 22 from n1 to n6; for example, n1 to n3 to n6 has a cost of 10, which is lower than 22. So if the conditions in the network change, we'd like to be able to adapt and choose the optimal path.

Now, how does n1 know that the cost of this link has changed from 2 to 20? There are different approaches. One approach is that, on a regular basis, every node sends a message to every other node saying what the current costs of its links are. Let's say every one minute node 5 sends a message to everyone else saying "the costs of my links are 1 and 1, 1 and 1, 2 and 4". It keeps sending that message every minute, and when this cost changes to 20, it sends a message saying "the costs of my links are 1 and 1, 1 and 1, 20 and 4". When that message gets to node 1, node 1 knows this cost has gone up to 20, so it recalculates and chooses a better path, whatever the second best path is. So for that to work, we need nodes to send updated information. Where do we get this information about the changes in the network from, and how often do we get these updates? What is the source of information about the network?
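The adaptation step can be sketched as follows. The n1-n4-n5-n6 cost of 4 and the n1-n3-n6 cost of 10 come from the lecture; the split of that 10 into individual link costs of 4 and 6 is an assumption for illustration, since only the total was stated.

```python
def best_path(candidates, cost):
    """Pick the candidate path with the least total directed link cost."""
    return min(candidates,
               key=lambda p: sum(cost[(a, b)] for a, b in zip(p, p[1:])))


# Directed link costs; the 4 + 6 split of the n1->n3->n6 path is assumed.
cost = {("n1", "n4"): 1, ("n4", "n5"): 1, ("n5", "n6"): 2,
        ("n1", "n3"): 4, ("n3", "n6"): 6}
candidates = [["n1", "n4", "n5", "n6"], ["n1", "n3", "n6"]]

print(best_path(candidates, cost))  # n1-n4-n5-n6 wins at cost 4

cost[("n5", "n6")] = 20             # the update arrives: link cost 2 -> 20
print(best_path(candidates, cost))  # n1-n3-n6 now wins: cost 10 beats 22
```

In a real network n1 would rerun its shortest path algorithm over the full topology rather than compare a fixed candidate list, but the recalculation trigger, an update message changing a stored link cost, is the same.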
And what is the timing at which we perform updates? What triggers an update? The general trade-off is this: the more sources of information, and the more frequent the updates, the more likely we'll choose the optimal path. If we don't receive any updates, then we'll likely choose suboptimal paths when things change. For example, if we didn't receive the update that that link had changed from 2 to 20, we'd always be using the suboptimal path. So we want to receive information from as many nodes as possible, and as frequently as possible. The problem, though, is that the more information is sent across the network, and the more frequently it's sent, the more overhead it creates: it actually needs to be sent across the network, so there's some processing overhead and some transmission overhead, and we want to minimize that. So there's a trade-off: frequent updates from many nodes are good for choosing optimal paths, but bad for efficiency, because we need to send all that information. What we'll do next week is look at some options and the trade-offs between them. Let's stop there and continue next week.