So, from yesterday we said performance metrics are the way to measure the performance of communication systems. We're just going through a few common metrics, some that you may have already heard of or used; others we'll introduce in later topics as we go through the course. So let's look at the ones we've introduced, and I'll give a few more examples and a few more calculations that you can follow along with, just to make sure you understand. Let's get this out of the way. First, data rate. At this stage, think of data rate as a characteristic of the devices or technologies that we use. You buy a laptop, it has a Wi-Fi device built into it; there's a chip inside on the motherboard for Wi-Fi, or for wireless LAN. That gives us the maximum data rate that the wireless LAN or Wi-Fi device can send at. So think of it as a characteristic of the device, or in some cases the link. In the next topic we'll look at how we can achieve particular data rates: why is it 100 megabits per second? Why isn't it 200, or a million megabits per second? But at this stage, think of data rate as something that's given, saying this is the data rate of this particular technology. It specifies the speed at which we can send bits out of our computer. The data rate of my LAN card is the speed at which I can send bits out of my laptop onto the LAN cable; they travel across the LAN cable and are received by the destination computer. If you cannot see it, I've got my laptop connected via this blue LAN cable into my second laptop here, the blue one. We'll do a few experiments with that. But first, some simple concepts. With data rate, let's make some notes and take some examples. I want to find the data rate of my computer. I'll show a few commands on the screen here.
The purpose of this course is not for you to know these commands, okay? I'm not going to ask in the quiz or the exam how to run the command on Linux that shows the data rate. I'm just using it to show some data and some information that we have available about the network. For those that are interested, you can follow up in your own time. I want to try and find the data rate. The LAN cable is referred to as Ethernet; the technology we use is Ethernet. So I need to use the Ethernet tool to look at my Ethernet interface. Let's hope this works. It shows me a lot of information. The one thing I care about is the speed. This tells me the data rate, the speed at which I can send bits via my LAN card: 100 megabits per second in this case. It depends upon the two endpoints, upon what data rates the two endpoints support. In this case, my new laptop supports 10 megabits per second, 100 megabits per second and 1,000 megabits per second, but my older laptop only supports 10 and 100, therefore they negotiate to both use 100. So the speed of my link between my two laptops is 100 megabits per second. Let's make a note of what we have, and we'll do some calculations as we go. The data rate in this example is 100 megabits per second. One bit: how long does it take for my computer to send one bit? We can send 100 million bits in one second, so one divided by 100 million will tell us how many seconds it takes to send a single bit. One divided by 100 mega is what, 10 nanoseconds, or 0.01 microseconds, is that correct? When I make mistakes in the calculations, get your calculator out and double-check for me. Any questions? To send one bit takes 10 nanoseconds in this case.
One divided by 100 million is 10 to the power of minus 8, which is 10 nanoseconds or 0.01 microseconds. This is a mu here, not an m: 0.01 microseconds. So when my computer is transmitting on the LAN cable, every 10 nanoseconds it can send a single bit. So if I have a file which is a million bits in length, then the time it takes to transmit across the link: one bit takes 10 nanoseconds, so a million bits takes 10 million nanoseconds, which is 10 milliseconds. So the data rate is the rate at which I can send bits out of my computer. If I was using Wi-Fi, the data rate may be different, say 54 megabits per second, the rate at which I can send wirelessly to the access point. For example, if I have a file which is 1 megabyte, ignoring overheads (we'll come to overheads in a moment), how long does it take to transmit that 1 megabyte file across my link? Anyone have an answer, or at least the way to find the answer? What's the answer? The file length is 1 megabyte. I can send 100 million bits per second, or 1 bit every 10 nanoseconds. How long does it take in seconds to send the file? What do you get? 0.125 something, okay? Be careful: 1 megabyte, the B here is a byte. So that's 8 megabits, lowercase b for bits, which is 8 million bits. I've got 8 million bits to send. My device can send 100 million bits per second. So how many seconds does it take? Well, the time to transmit is 8 million bits divided by 100 million bits per second, our data rate, which is really 8 divided by 100, which is what? And get the correct units. What are the correct units? A good answer here, again? Okay. Correct: 0.08 seconds, or 80 milliseconds. So depending on the prefix, there are multiple correct answers.
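As a quick check of the arithmetic above, here is a sketch of the calculation in Python. This is not part of the lecture itself; the numbers are just the ones used so far.

```python
# Time to transmit data across a link, given the link's data rate.
data_rate_bps = 100e6               # 100 megabits per second

# Time to send a single bit is 1 / data rate.
bit_time_s = 1 / data_rate_bps      # 1e-8 s, i.e. 10 nanoseconds

# Time to send a 1 megabyte file: convert bytes to bits first.
file_bits = 1e6 * 8                 # 1 MB = 8 million bits
transmit_time_s = file_bits / data_rate_bps
print(bit_time_s, transmit_time_s)  # 10 ns per bit, 0.08 s (80 ms) for the file
```

The same pattern (size in bits divided by rate in bits per second) covers every transmit-time question in this section.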
So one answer was 0.08 seconds, correct; 80 milliseconds, correct; 80,000 microseconds, correct; 80 million nanoseconds, correct. So you can use a different prefix; try and choose one which is easiest to write down or to say. In this case I don't mind: 0.08 seconds or 80 milliseconds, I consider both fine, they're both compact numbers, so we've chosen an appropriate prefix. The other way: one bit takes 10 nanoseconds, so 8 million bits is 10 times 8 million nanoseconds. 10 times 8 million is 80 million nanoseconds, which is 80 milliseconds. So it would take me 80 milliseconds to send the file across the link to the other computer. Any questions? Let's transfer a file between computers and see how long it takes. I don't have a one megabyte file, I have a larger one. Let me remind myself of what to do. The scenario I have, if you can't see my computers, looks like this. I have two laptops, the gray one and the blue one, and I have a cable between them, and I've set them up so that, think of it this way, one of them is running a web server and one's running a web browser, or client. And I've set the IP addresses just for this experiment. We haven't really discussed the meaning of IP addresses, but they're an address of my computer: this one is 1.1.1.1, the blue one is 1.1.1.2. I want to transfer a file between them. I'm actually going to download a file from the server, so a file goes from the server to the client, and we'll see how long it takes. I've got a file on the server which is 50 megabytes. We just did a calculation for a one megabyte file; this one's 50 megabytes. How long will it take? wget just downloads the file from the server 1.1.1.2, the blue one, and the file is in the ITS 323 directory. From memory it's called meg50.bin. It's a 50 megabyte file, exactly 50 million bytes. Now, let's hope this works. It's downloading. How long did it take? About 4.3 seconds. So 50 megabytes took 4.3 seconds.
Coming back to our calculation before, how long did you expect it to take? Well, you could calculate. We calculated for a one megabyte file; what about a 50 megabyte file? Well, it's 50 times the size, so it should take 50 times as much time to download. One megabyte took 80 milliseconds to transmit, so 50 megabytes should take 50 times as much: 50 times 80 is 4,000 milliseconds, or 4 seconds. How long did it actually take? Close. It took 4.3 seconds. So when we actually did the transfer, it took 4.3 seconds, but based upon the data rate we calculated 4 seconds. Why are they different? Anyone want to guess what may cause the difference? Should it be exactly 4 seconds? Have we made a mistake somewhere? Well, if we did send at this data rate, then it should be 4 seconds. Trust me, the file is exactly 50 megabytes; I set the file to be that. But it took slightly longer. So even though the data rate is 100 megabits per second, and with a data rate of 100 megabits per second it should take 4 seconds, it took 4.3: something else happened. And we'll see that there's some overhead. Some other things were sent at the same time, so it's not just purely sending 50 megabytes across the link. The protocols that transferred that file (at least you know one of them is HTTP) involve some overhead with that data transfer, which means it takes slightly longer than our 4 seconds: 4.3 seconds in this case. We'll look in detail at some of that overhead in a moment. So in this case, what's the throughput? Throughput is the rate at which the real data, in this case the file, is delivered: 50 megabytes in 4.3 seconds. The 4 seconds was the time to transmit, based upon the data rate. But now let's calculate the throughput from our experiment.
If you can't read my writing, just ask a question, okay? We transferred 50 megabytes in 4.3 seconds. So the throughput is the rate at which that 50 megabytes was delivered. On average, we get a throughput of 50 million bytes, 50 times 10 to the power of 6 bytes, converted to bits by multiplying by 8, divided by 4.3, and you need your calculator to solve that one. 50 million times 8, divided by 4.3 seconds, is approximately 93 million bits per second. And if we wanted, we could convert that to bytes per second: 11.6 megabytes per second. I'm going to leave it in bits per second. Often when we talk about data rate, it's measured in bits per second, and therefore we'll keep throughput also in bits per second, but you could convert to bytes if you like. So, to get this correct: the data rate of the link is set at 100 megabits per second, but the throughput I achieved is just 93 megabits per second. How efficient are we? How efficiently are we using the link? The data rate of the link is 100 million bits per second, but we transfer data at 93 megabits per second. We'd say we're 93% efficient. The maximum speed is 100, we're getting 93: 93% of the link is being used for data transfer. Let's make a note of that. So just coming back: the throughput was 93 megabits per second; the data rate from before was 100, that was a given. We could say in this case the efficiency is 93 out of 100, or 93%. I'm using my link to transfer real data at 93% of its capacity. The other 7% is generally overhead. Sometimes we cannot avoid that overhead; it's part of how the system works. But we often want to know how efficient a particular communication system is. Any questions so far?
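The throughput and efficiency numbers can be checked the same way; a small sketch using the figures from the experiment:

```python
# Throughput: real data delivered per unit time, from the 50 MB download.
file_bytes = 50e6                       # the file was exactly 50 million bytes
transfer_time_s = 4.3                   # measured download time

throughput_bps = file_bytes * 8 / transfer_time_s
print(round(throughput_bps / 1e6, 1))   # roughly 93 Mb/s

# Efficiency: throughput relative to the raw data rate of the link.
data_rate_bps = 100e6
efficiency = throughput_bps / data_rate_bps
print(round(efficiency * 100, 1))       # roughly 93 %
```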
Today, we're just going to go through similar examples like this to demonstrate some of our metrics. Everything okay? Ready for the quiz? Questions like this will be in the quiz, the online quiz, to give you more practice. Okay? Sure? Not sure. What are you not sure about? So, yep, question. Okay, good question. How did I go through the one-bit case? Let's go up and see what I did. So think of the data rate as the maximum speed I can send bits. Sometimes we call it capacity. It's the fastest speed I can send bits across the link, and in this case we always send them at that speed. So the data rate is given for a particular technology. I buy the LAN card; it says in the spec, 100 megabits per second, so 100 million bits per second. So one question is: to send one bit, how long does it take? Well, if it takes one second to send 100 million bits, then the time to send one bit is that one second divided by 100 million. So one divided by 100 million gives us the time to send one bit. To transmit one bit takes 10 nanoseconds. To transmit 100 bits takes 10 nanoseconds times 100, which is one microsecond. To transmit 100 million bits takes a million times one microsecond, which is one second. It's just speed and time: if you drive at 100 kilometers per hour, how many hours does it take you to drive one kilometer? One divided by 100. You can convert to minutes and seconds. It's just a speed, the data rate, and a time. And note, we send our bits at a constant speed; every bit is sent at that speed, 100 million per second. So one bit every 0.01 microseconds: 10 bits would take 10 times this, a million bits would take a million times this, and eight million bits, or eight megabits, or one megabyte, would take eight million times this, which turns out to be 0.08 seconds. Any further questions?
Make sure you're clear on this. These basics are going to be assumed as we go through the next topics. If you don't understand now, then ask and we'll make sure it's clear. I have another file on my blue laptop, 100 megabytes. How long will it take to download? Estimate how long you think it would take. What do you think? Okay, how did you come up with that, eight and a half? Just a wild guess. Okay, correct. I downloaded a file which was 50 megabytes in 4.3 seconds, so you'd expect that if I download a file which is double the size, it would take double the time. If it's 100 megabytes, it should take 8.6 seconds. Now, in real experiments things are not exact, but let's try and see if it's close to 8-point-something seconds. Just make sure my network's working. I've got another file called meg100. Downloading. Let's hope this works. 8.5 seconds. He said 8.5; well, it's close to 8.6. So the throughput in this case is about the same as before, and that's what we'd expect. This link just has two computers attached, no one else is using it, so we'd expect to always get about the same performance. There may be some small differences in overheads, which we'll see in a moment. So think of data rate as the fastest speed at which we can send bits across a link or across a network, whereas throughput is the actual speed at which we can deliver our real data. It takes into account that we have other overheads to send across the link. We may see some more calculations. Let's look at overheads and see what I mean when I say overheads. What are we going to do? Here's our scenario. What I'm going to do in the experiment is transfer a file using HTTP, the protocol used for web browsing. In my browser on my client, I'll visit a website on the server, send a request to the server, and the server will send back the web page.
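The estimate for the 100 megabyte file just scales the earlier measurement; as a sketch:

```python
# If throughput stays roughly constant, download time scales with file size.
time_50mb_s = 4.3                      # measured for the 50 MB file

predicted_100mb_s = time_50mb_s * (100 / 50)   # double the size, double the time
print(predicted_100mb_s)               # 8.6 s predicted; 8.5 s was measured
```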
And we'll look at the overhead involved in doing that, to see what else we need to send. Before we do it, let's make sure everyone's at the same point. Okay, make sure you're clear. Any questions? You sure? Okay, efficiency is something we did quickly; we'll come back to another example of efficiency in a moment. Clear now? A bit clearer? 100 megabits clear? Okay. Yes, you need to study. Make sure things are exactly clear, especially the calculations, because I'm not going to spend much time on calculating things. Bring your calculator to lecture, if you like, to check, or use your phone to check; it's okay. Let's do this experiment. We'll just visit a website, but at the same time I'll try to record and see what happens when I visit the website. Let me set this up; just wait while I get ready for this test. What I'm going to do is, while I visit the website, record the packets being sent, the messages being sent, so we can look in detail at what was actually sent between the two computers. So I'm using this software to record; we'll zoom in in a moment. And let's go to my browser. Let's hope this works. 1.1.1.2, the other computer. It has a web server on there. And the file I want, I think, is this one. Okay. So that worked. I typed in the address of the other laptop, and I know there's a web page on there that we want to download, and it downloaded to this first laptop, to the browser. So the HTTP request and response happened. The data is the web page; that's the real data in this case. These are the actual messages sent, and there's a lot of detail here. There are something like 12 or 15 messages sent between the two computers just to transfer that web page. There are different protocols in operation to do that data transfer. It's not as simple as send one message and get a response; it's more complex. We don't know much about what those protocols are doing at this stage.
At the end of the semester, we should know: what is TCP? Why did it send these first three messages? We don't know yet. You do know that HTTP sends a request and a response. This message is the request. It's from computer one, going to computer two: source, destination. And it's saying, I want to get the file it.html in the directory its323. This is the request. There are some other messages here, and then eventually this one's the response. It's coming from computer two, going to computer one. It says everything is okay. Let's look in the details. The way this software works is that when I highlight a message or a packet here, the details of that are shown here. So if we look in the details of this response and expand this, we'll see the response contains the web page. I will not show it all, but this is the actual web page that was shown on the browser, the HTML. This is the data. How big is the data? Hard for everyone to read, but down the bottom it says it's 291 bytes. If you saved that HTML page, you'd see it's 291 bytes. So the data is 291 bytes. But how big is this message? Well, the length is 616 bytes. The data is 291, but the total size is 616, so the rest, the other part, is the overhead. The other 325 bytes is what we call overhead. Just record those numbers. Just focusing on the response: there are other messages, but if we look at the response, the total size was 616 bytes and the data is 291 bytes, so we'd say the rest, 325 bytes, is overhead. The message is made up of real data plus overhead: other information that supports the operation of that communication. We'll see some examples of it in a moment. The overhead is one of the reasons why our throughput does not match the data rate. Because in addition to sending this data, we need to send some other stuff as well, and that takes time. The throughput measures how fast we deliver the data only.
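The overhead bookkeeping for that one response message, sketched with the numbers from the capture:

```python
# Overhead in the captured HTTP response: total size minus the real data.
total_bytes = 616      # length of the whole response message
data_bytes = 291       # the HTML web page itself

overhead_bytes = total_bytes - data_bytes
print(overhead_bytes)  # 325 bytes of overhead
```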
So therefore it's less than the data rate. What is the overhead? What are some examples? Well, let's look at this individual message. The data is the web page, all of this. But there are a lot of other things included in this message, some of which we may recognise. There's the source and destination address; they are included in the message, so they contribute to the overhead. It's typical in most protocols that you include inside the message who sent it and who it's going to, and that takes up extra space. HTTP includes other things, like the time and date. When the server sends back the response, the server timestamps the response, saying this response was sent on the 19th of August at 4 a.m. universal time. The server software and version: the server sends back in the response something that identifies the web server software. There are some other things which are useful for web browsing, to make things a bit more complex: this content was encoded, it was compressed. Some things just make your web browsing more convenient when you revisit that website, so that you can visit a cached version, and so on. So all of this is overhead, which is usually needed for the protocol to work. If we didn't have it, overhead would be lower, which is good, but the protocol may not work correctly; we may not be able to deliver the data. So it's a trade-off between having enough information for the protocol to be able to work, but not so much that overhead is too large. At this stage, we're not concerned too much about what this information does; as we go through individual protocols, we'll see that. Just be aware that we always have some overhead. Any questions about overhead? Okay. Now, it's much more complex than what we've just considered. In this web page download, the overhead is not just inside that packet; it's all these other messages that had to be sent just to get that one web page down to my computer.
So there are a lot of things happening in the background that make it very hard to predict what the performance will be. We cannot easily calculate the total overhead; it gets more complex. With respect to this one message, though, not the total transfer, we could talk about efficiency: efficiency in delivering the data, 291 bytes of real data sent in a total of 616 bytes. So for this individual message, we can also calculate efficiency. It's not common to do that, but we could: 291 divided by 616 is 0.472. That is, 47% of that message contains real data and 53% contains overhead. That's all we're saying. We can express efficiency as a decimal, so as a ratio, or as a percent. We've looked at an example of data rate; think of it as the raw rate at which we can send bits, 100 megabits per second in our example. When we communicate, we use protocols, and protocols introduce some overhead. The overhead is usually needed; we can't always avoid it, though we can sometimes minimize it. That overhead means the rate at which we deliver real data is lower than the raw data rate, and the rate at which we deliver real data is called the throughput. So we often measure the throughput, because that's really what we're interested in, although it's sometimes quite hard to calculate; in our simple case we could. So we've got overhead, throughput, data rate. Efficiency is a measure of the ratio of how much of our resource is used for its intended purpose: the ratio of real data sent to total data. Or, going back to our other example, we said we had a data rate of 100 megabits per second; my link supports 100 million bits per second. With our file download, we achieved 93 million bits per second. We can say in that instance our efficiency is 93 out of 100, or 93%. The efficiency of using my link was 93%.
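That per-message efficiency calculation, as a sketch:

```python
# Efficiency of one message: real data as a fraction of total bytes sent.
data_bytes = 291
total_bytes = 616

efficiency = data_bytes / total_bytes
print(round(efficiency, 3))    # 0.472, i.e. about 47% data and 53% overhead
```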
As we go through and look at different protocols, we'll use these metrics to compare them, one against another. Which one's more efficient? Which one gives us the best throughput, for example? Just going back to our lecture notes: data rate; delay we'll come to in a moment; error rate; overhead we've mentioned; throughput we've mentioned; and efficiency we've mentioned. Any questions on those four before we go to error rate and then delay? Okay, keep moving. Error rate. I don't have many network examples of error rate, just a few basic ones. Error rate is about when something goes wrong: we're transferring data and some of it doesn't get delivered successfully. Some percentage of that data doesn't get delivered successfully. So if I sent 1,000 bits and 23 bits arrived in error, we'd say the error rate, or the bit error rate, is 23 out of 1,000, which is 0.023 or 2.3%. Let's look at some examples of the impact of errors on data, and then we'll return to delay. What have we got? Let's say we send an email. We used the example yesterday: you submit your assignment via email, you send me the answer via an email, and that gives you your grade. So you send me an email, and let's have a look at the email that you send. Don't worry about the details. Let's say you send me this email. It's just: here's my submission for ITS 323, the answer's at the bottom, and the answer is 42. So you send me this, it's the correct answer, good, I receive it and you get full marks. But what if, when you send this email across the internet, across our communication system, there are some errors somewhere? That is, you send it across a link and there's some error in the link such that the bits don't get delivered correctly. So some of the data I receive is not what was transmitted. I've got some examples where I changed, randomly or pseudo-randomly, some characters or some bits in the source data.
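The bit error rate example above, as a one-line calculation with the numbers just used:

```python
# Bit error rate: bits received in error divided by total bits sent.
bits_sent = 1000
bits_in_error = 23

ber = bits_in_error / bits_sent
print(ber)      # 0.023, i.e. 2.3%
```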
I changed the bits and we'll see what impact that has on this message. Think about the information that's trying to be communicated here. The student is trying to get their submission and their answer to me; that's the aim of this communication system. If they send this email but there are some errors on the network, or in the communication system, what I receive will be different. This is the case where I receive a message with one error in it. I changed, in fact, one bit, or one byte to be more precise. Everyone see it? Can you see the error? Those who can see it, hands up, okay? Anyone else? Okay, there's just a change of a single bit in the source data, and that caused a character to change. You sent the word 'this', but there was an error in the communication, so I received a message that says T-I-I-S, okay? I can still read the email, I'm okay, you still get your marks. That was with one error. The next one is the message with two errors, okay? You'll quickly see what's happening. Two errors: the same one as before, plus 'submission', which is hard to read until you recognize what it is. Just random errors causing changes in the received data. The next one is three errors; there's a third one there, okay? A-O, it was 'an', okay? Just a single bit changing causes this character to change. The next one's five errors, all right? I receive this email and I think, well, they're not very careful when they type the message, but they've still got the correct answer; they're okay. That was five errors. The next one's 10 random errors, and I received this. There are a few errors through there, and unfortunately your answer is wrong, okay? So you get an F for this assignment. This is just a simple demonstration that a few changes, a few errors in the communication system, cause the received information to be incorrect. Now, the error rate: we will not calculate it exactly here.
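The demo itself, flipping a few random bits in a message, can be sketched like this. The message text, the helper name, and the error count here are made up for illustration; the lecture used a prepared email.

```python
import random

def flip_bits(data: bytes, n_errors: int, seed: int = 1) -> bytes:
    """Flip n_errors randomly chosen bits in data (a made-up helper)."""
    rng = random.Random(seed)              # fixed seed: pseudo-random, repeatable
    out = bytearray(data)
    for _ in range(n_errors):
        pos = rng.randrange(len(out))      # pick a random byte...
        out[pos] ^= 1 << rng.randrange(8)  # ...and flip one bit inside it
    return bytes(out)

sent = b"Here is my submission for ITS 323. The answer is 42."
received = flip_bits(sent, n_errors=3)
print(received)   # same length as sent, but up to three characters changed
```

A single flipped bit changes one character code, which is exactly why 'this' can arrive as 'tiis'.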
But the error rate in this case would be: there were ten bytes, or ten characters, in error here, and the total number of characters, if I counted them, I don't know how many, 50 or 100 characters in this message. The error rate would be ten out of that total number. With data messages, normally a few errors will cause problems; they will cause the receiver not to be able to understand the message. Here's our photo from yesterday. This was the original, the TIFF image, this 47 megabyte photo. I introduced a few errors into it. So the next one, this one, has one bit in error. Where is it? Well, I don't think you can see it; I don't know where it is. I randomly changed one bit. So this is the original, and in the source file I altered just one bit of those 47 million bytes, and effectively that changes one pixel. Remember, each pixel in this case is a red, green, blue colour, an 8-bit value for each, and it changed one of them. So somewhere in this image there's a pixel which is not exactly the same colour as in the original. An error like this we can hardly recognise; a human cannot recognise it in this case, though a computer could. And you can extend this: what if I changed many bits, each by just a small amount? Instead of this shade of blue, a slightly different shade of blue. With many bit errors like that, you would still not detect it. So with images, with video, with audio, bit errors, errors in our data transmission, do not necessarily mean that the data is useless. In this case, the data is still useful; the information is still conveyed. In this one I changed about a thousand bytes, or a thousand pixels I think. Can you see it? You can almost see it if you zoom in. I changed a thousand pixels that were blue to be black. I don't know if we'll see it on this screen; I could see it on my computer, but it may not be visible here. Not the white dots.
The black dots, all right, maybe you need to be close. There are a few black dots there. I just changed some blue pixels to be black pixels. Again, the information is still conveyed. If the purpose was to be able to look at this picture, even with a thousand errors, it's hardly noticeable. Last one: I changed a megabyte. This file was 47 megabytes; I changed one megabyte to be just random pixels, and that's what you get. Obviously noticeable in this case. So one out of 47 megabytes is just completely random. But still, the information may be useful. If you receive this image, you still may be able to find it useful. That is, you'd notice the errors, but the information can be useful. For example, if the question is how many people were standing on the craft, even with a lot of errors, you can still get that information out of the received data. Generally, in images, video, and audio, errors do not have as significant an impact as in text and files. We could actually calculate the error rate: we'd count all the bytes in error divided by the total number of bytes. In this case, it's one million bytes in error divided by 47 million bytes, one out of 47. Let's go to our last metric, delay. We need to spend a bit more time on that one. What are we doing? Remember, I'm still connected. Let's get rid of this. We've still got the connection between my two laptops. I want to measure the delay across my link. I want to see how long it takes to get a message from client to server and also back. And yesterday we saw PING. PING is an application that allows us to send a message there and get a response back. You ready for PING? Any questions? Okay, let's see if we can PING from the client to the server. What you'll see here is the client. I'm going to PING, and then I'll explain what happens when I do a PING. PING actually sends a message and repeats, and I want it to repeat just 10 times, a count of 10.
And I'm going to send a message which is 100 bytes; the size is 100. So my client will send a 100 byte message to the server, and the server will send back a response of about 100 bytes. It's a little bit more, but about 100 bytes. From my computer to 1.1.1.2, the server computer. Is that correct? And we see in the rightmost column the time it takes to get the response. It does this 10 times: sending a message of 100 bytes and getting a response of 108 bytes; the server adds on a little bit extra, so it's about 100 bytes. And the time it takes is listed here for each of the 10 attempts. It varies a little bit. The average, if you want to focus on that, is this one: the average of those 10 values, in milliseconds, 0.602 milliseconds in this case. Let's try and draw that, record it, and then change some conditions. So how does PING work? What happens? A small request is sent, and then the server sends back a response. Request, response. The protocol (we'll just call it PING, though it's got another name) repeats doing that. We did it 10 times, so 10 requests, and each request gets a response. The time between each request was about one second: send a request, get a response; one second later, send a request, get a response, and so on. In that first demo, what did we send? A request of about 100 bytes, and the average time we got was 0.602 milliseconds, about 600 microseconds. Why? Why is it that value? That was the average of those 10 values. Why do we get 600 microseconds? Roughly speaking, that's the time to get there and back. PING measures what we call the response time, or round trip time. I send the request and start my timer. The server receives the request, the server responds. I receive the response and stop my timer, and it comes up with 600 microseconds, or 0.6 milliseconds. Why did it take that long? Why did it not take a shorter or a longer amount of time?
What do you think caused it to be about 600 microseconds? Anyone? Let's try some different values and see if things change, and then we'll talk about what causes this time. I'll do it again, but I'll change the size of the message. Instead of 100 bytes, let's change each message to 500 bytes, five times as large. What happened? We get about 900 microseconds, 0.898 in this case. Before, we had about 600 microseconds. So changing the message size has had some impact. This one is approximately 600 microseconds, and this one is approximately 900 microseconds; there's some variation in there. One more: 1,000. I increased the message size to 1,000 bytes and it's up to about one millisecond, 0.982 milliseconds. We could try others, and we may see some variation, but I think you'll see that the larger the message we send, the larger the delay. Why? Explain. You're correct, I think you're on the right track. Yes, increasing the size of the data has increased the time it takes to get a response. Why? Think about the data rate. The data rate for my link is fixed: 100 megabits per second. To send 100 bytes, 800 bits, takes some period of time to transmit. To send a larger amount of data takes more time to transmit. The more data to send, the longer it takes to transmit across the link, and therefore the larger the delay in getting a response back. So yes, the larger the data, the larger what we call the transmission time, or transmission delay. One of the slides defines this, but let's just do it here. The transmission delay is defined as the data size divided by the data rate. If you look at the slides on delay in detail, you'll see a more precise definition, but it's the time it takes to transmit the data out of our computer. For example, if we have 100 bytes, what do we have? Think of my computer as transmitting bits out.
We want to know how long it takes to transmit those bits out. Well, we know we can transmit 100 million bits per second. So if we know how many bits we have to send, then we know how long it takes to transmit them. So if we have 100 bytes at a data rate of 100 megabits per second, which was the characteristic of our link, then the transmission delay, let's denote it T-trans, is the size, 100 bytes, converted to bits. Remember, when we do calculations we need to use the same units; we can't mix bits and bytes. So 100 times 8 bits, the data size, divided by our 100 megabits per second, 100 times 10 to the power of 6, gives eight microseconds. So the time to transmit a 100-byte message out of my computer is eight microseconds. That's a mu character, not an m: micro. How long to send 1,000 bytes? If 100 bytes takes 8 microseconds, then 1,000 bytes takes 80 microseconds. Easy, okay? Ten times the size, ten times as long. Simply the data size divided by the data rate. So one component of delay is transmission delay. The larger the data we have to transmit, the larger the transmission delay and the larger the total delay. What else? What else causes the delay to be 600 microseconds or 1,000 microseconds in our examples? For example, with our 100 bytes, the transmission delay is 800 bits at 100 megabits per second, eight microseconds. It takes me eight microseconds to send the message out onto the link. It gets to the blue computer, and the blue computer takes about eight microseconds to send the response back. That's a total of 16 microseconds so far. But our results showed it was 600 microseconds. Transmission delay accounts for 16 microseconds, but the total delay was 600 microseconds. Where are the other 584 or so microseconds coming from? What else causes delay? We haven't covered signals yet, but we've mentioned them a few times. We actually transmit our information, our bits, as signals. For example, an electrical signal across my cable.
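The transmission delay calculation is just the data size in bits over the data rate. A minimal sketch, using the 100 Mb/s rate the two laptops negotiated:

```python
def transmission_delay(size_bytes: int, rate_bps: float) -> float:
    """Transmission delay in seconds: data size in bits / data rate."""
    return size_bytes * 8 / rate_bps

RATE = 100e6  # 100 Mb/s, the negotiated link speed

print(round(transmission_delay(100, RATE) * 1e6, 3))    # 8.0  microseconds
print(round(transmission_delay(1000, RATE) * 1e6, 3))   # 80.0 microseconds
```

Note the units: the size is converted from bytes to bits before dividing, so the result comes out in seconds.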
So a signal comes out of my computer, think of some waveform coming out of my computer onto the cable; it flows across the copper conductor and is received at the other computer. That takes time. The time it takes to get a signal from one point to another depends upon the physical distance and the speed of the signal. Unless we know otherwise, we'll assume the speed of a signal is the speed of light. It's usually less for electrical signals, but let's assume it's the speed of light. That's another form of delay, called propagation delay. Propagation, I always spell it wrong. It's determined by the physics of sending a signal: the distance over which we need to send the signal and the speed at which we can send it. How long is my link? Anyone? This blue cable is about one meter, maybe a little bit longer. So the distance of my link, in my case, is one meter. And how fast does a signal travel across there? Let's assume for simplicity the speed of light. It's actually a little bit slower than the speed of light, but let's assume it's the speed of light, which is what? About 300 million meters per second, roughly. Therefore the propagation delay is 1 divided by 3 times 10 to the 8, which is about 3.3 times 10 to the minus 9 seconds. You can check that one. About 0.0033 microseconds, or 3.3 nanoseconds, to send a signal across one meter. Our transmission delay of one message was 8 microseconds. To send the 100 bytes out of my computer takes 8 microseconds. To get the signal that represents each bit from one computer to the other takes about 0.0033 microseconds: much, much smaller than the transmission delay in this case, because it only needs to travel one meter. So far we know there's transmission delay, which depends upon the data size, and there's propagation delay, which depends upon the link distance.
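Propagation delay follows the same pattern: distance divided by signal speed. A sketch, assuming the speed of light by default:

```python
def propagation_delay(distance_m: float, speed_mps: float = 3e8) -> float:
    """Propagation delay in seconds: distance / signal speed."""
    return distance_m / speed_mps

# One meter of cable, assuming the signal travels at the speed of light:
print(round(propagation_delay(1.0) * 1e9, 2))        # 3.33 nanoseconds

# A more realistic speed for copper, about 2e8 m/s:
print(round(propagation_delay(1.0, 2e8) * 1e9, 2))   # 5.0 nanoseconds
```

Either way, over one meter the propagation delay is in the nanoseconds, thousands of times smaller than the 8-microsecond transmission delay.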
But our total delay was still something like 600 microseconds, and so far we've accounted for only about 16 or 17 microseconds. Where is the other delay coming from? Anyone want to guess? Where are the other 580-odd microseconds coming from? The cable quality? You're correct that in a real cable the speed of transmission is not exactly the speed of light; it's slower, about 200 million meters per second. But that doesn't change things much, okay? It's slower, but still in the order of nanoseconds. My computer is the other component. Because what happens? When I type ping and press Enter, that triggers the application to do something: create a message, a 100-byte or a 1,000-byte message. My computer, the application, the operating system, the LAN card all go to work processing that message; it passes through the operating system, then to the LAN card via some driver, and then is sent. There's some delay inside the computer to process the message, and we cannot avoid that. The same happens at the receiver: the LAN card receives the message and passes it to the operating system and the application on my receiving computer, which processes it, takes some time, and sends back a response. So the other component, the major component in this case, is what we call processing delay, which is really the time it takes the computers to process the message. We don't have an easy way to calculate that; it depends upon so many different factors. It depends upon the computer, so this computer would have a different processing delay than another one. In our course, we will not attempt to calculate it. It's probably in the order of 500 microseconds in total here, so maybe 200 or 300 microseconds of processing on each computer. It depends upon your CPU, your hard disk, what other applications are running, your operating system, and so on. So the total delay is made up of these components: transmission, propagation, and processing.
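Putting the pieces together for the 100-byte ping: we can calculate the transmission and propagation parts, and whatever is left of the measured round trip is the processing delay, which we estimate rather than compute. A sketch using the numbers from the demo (the 600-microsecond figure is the measured average, the rest are the assumed link parameters):

```python
rate_bps = 100e6      # 100 Mb/s link
distance_m = 1.0      # about one meter of cable
speed_mps = 2e8       # assumed signal speed in copper

trans = 2 * (100 * 8 / rate_bps)       # request + response: 16 microseconds total
prop = 2 * (distance_m / speed_mps)    # there and back: ~10 nanoseconds
measured_rtt = 600e-6                  # the ~0.6 ms average PING reported

processing = measured_rtt - trans - prop
print(round(processing * 1e6))         # ~584 microseconds spent inside the two computers
```

So in this short-link, small-message case, processing dominates; on a long or slow link the balance shifts toward propagation and transmission.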
And we'll finish on this last slide. Delay is the time it takes to get data from one point to another. It's made up of multiple components, which are additive: if we know the individual components, we add them up to get the total delay. Transmission delay is the time to transmit the data onto a link. Propagation delay is the time for the signal to travel across a link. Processing delay is the time it takes the computer to process the message. And another one we haven't seen yet, which we'll see in more complex communication systems, is queuing delay; we'll introduce that next time. Next week we'll go into delay in a little more detail via these slides and finish this topic. Then we'll move on to what our signals look like, and some of this will make more sense once we understand signals.