In this lecture we're going to summarise what we've learnt about communication signals, and especially what we call the trade-offs between the different characteristics, the advantages and disadvantages. A lot of it is going to be review, so I'll be flicking between some of the slides, the set of handouts where you've filled in the signal equations and calculated some values, and maybe a few new things. I'm going to try to summarise, for signals: why do we care about bandwidth, what's the relationship with data rate, how do errors come into play, and a few other concepts. The first thing I want to talk about is the relationship between bandwidth and data rate. What's the general rule or the trade-off? If we increase bandwidth, what happens to data rate? Up or down? Increase bandwidth, data rate goes up. That's our general rule: the more bandwidth we have, the more bits per second we can send, given the other factors stay the same. So let's look at that with a simple example, an extension of one we've looked at already. We've seen this one; if you go to this page, we've written a signal equation for this red signal, which has how many components, if you can remember? This one had two components, which means it was created by summing two sine waves together, each with a particular frequency, and from those frequencies we could determine the bandwidth. So I'm going to use an equation which produces a signal that looks like this one, but I'm going to change the scale. This one was on a scale under one second; I'm going to make a few changes, but it's going to be a signal with two components. So this is the exact same shape but a different scale. Let's first write a general equation for this signal. The signal s(t) has two components, so we need two sine functions, and remember the general form for our sinusoids is sin(2 pi f t + phase).
For simplicity the phase is going to be zero; it'll make our life easier. So the first component is sin(2 pi f t). There's a second component, and the pattern we used was that the second component has one third the amplitude of the first and three times the frequency. If the frequency of the first component we denote as the variable f, then the frequency of the second component we can write as 3f. That's the general equation for this signal: s(t) = sin(2 pi f t) + (1/3) sin(2 pi 3f t). We don't know f yet, so f could take different values. I think in the example we used, f was 2 hertz. So you had something like sin(4 pi t), since f was 2, plus (1/3) sin(2 pi times 3 times 2 times t), which is (1/3) sin(12 pi t). We also had an amplitude, a multiplier of 4 over pi, at the front of both of those sines. That just changes the height; the 4 over pi gave us a height of about 1.3. I'm going to leave it off just to save space, because it could be any value. It could be 100 sin(2 pi f t) + 100 times (1/3) sin(2 pi 3f t); it would just change the scale on the vertical axis, and it's not of interest in this example. What we want to look at is the relationship between bandwidth and data rate. So, given a particular bandwidth, let's see what data rate we can achieve. Let's say the bandwidth that we have, BW, is 2 kilohertz or 2,000 hertz. Our link or transmission medium allows us to send only a bandwidth of 2,000 hertz, and we want to generate this signal. What frequency f should we use such that the bandwidth is 2,000 hertz? 1,000 hertz. Why? The bandwidth is the difference between the maximum frequency component and the minimum. Here we only have two, so it's the difference between the frequencies of the two components. The first component is f, the second component is 3f. What's the difference? 2f. So in general, the bandwidth equals 2 times the frequency for this equation.
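To make the equation concrete, here's a minimal Python sketch of this two-component signal using the f = 2 hertz example. The function name and sample point are mine, just for illustration:

```python
import math

def s(t, f=2):
    # first component sin(2*pi*f*t), plus a second component with
    # one third the amplitude and three times the frequency
    return math.sin(2 * math.pi * f * t) + (1/3) * math.sin(2 * math.pi * 3 * f * t)

# the two components sit at f and 3f, so the bandwidth is 3f - f = 2f
f = 2
bandwidth = 3 * f - f
print(bandwidth)   # 4 Hz for the f = 2 example
```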
We have a bandwidth of 2,000 hertz, so f should be set to 1,000 hertz. That's in fact the fundamental frequency of the signal: the signal shape repeats 1,000 times per second. What's the period of that signal? How do you calculate the period? It's the inverse of the frequency, 1 divided by the frequency. So 1 divided by 1,000, which is 1 millisecond on our plot. I won't draw it, but this point here would be 1 millisecond. The shape repeats every 1 millisecond, so the total duration of the plot is 2 milliseconds. Now let's continue with the assumption we've made in the past: when the signal is high for half a period, that represents bit 1, and when the signal is low for half a period, it represents bit 0. So in this plot there are in fact four bits transmitted: 1, 0, 1, 0. What's the data rate? I'll just write it as DR for data rate. In one period we transmit two bits: in this duration there's a high, bit 1, and a low, bit 0. So two bits in one period in this example, two bits every 1 millisecond, which in one second means 2,000 bits: 2,000 bits per second. So in this specific case, if we have a bandwidth of 2,000 hertz, we can achieve a data rate of 2,000 bits per second. Any questions yet? Just on this one. How did I know that one period carried two bits? Good question: I defined it that way in this example. That's determined by what we call the signal encoding technique: how do we map bits, 0s and 1s, or more generally data, to our signal? We need to define, at both the transmitter and receiver, some scheme, some technique, that says bit 1 is represented by a high signal strength for half a period, and bit 0 by a low signal strength for half a period. But it could be different. We could have defined it as high-then-low represents bit 1 and, although not on this plot, low-then-high represents bit 0.
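The arithmetic in this example can be checked with a few lines of Python. This is just a sketch; the two-bits-per-period figure comes from the encoding scheme we defined, not from the bandwidth itself:

```python
bw = 2000                 # available bandwidth, Hz
f = bw / 2                # since bandwidth = 3f - f = 2f
period = 1 / f            # 0.001 s, i.e. 1 millisecond
bits_per_period = 2       # high half-period (bit 1) + low half-period (bit 0)
data_rate = bits_per_period / period
print(data_rate)          # 2000.0 bits per second
```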
We could have defined it differently, but in this example, high for half a period is one bit. We may come back to that technique and its relationships shortly. Let's try the same shaped signal, but with a different bandwidth, say double: 4,000 hertz. Same signal equation; what should the frequency be? The components go from f to 3f, a bandwidth of 2f. We said the bandwidth will be 4,000, so 4,000 = 2f, and f = 2,000 hertz. So we'd use a frequency of 2,000 hertz. The period is 1 divided by the frequency: half a millisecond. And using the same signal encoding scheme, what's our data rate? In half a millisecond we get two bits, so in one millisecond we get four bits, and in 1,000 milliseconds, one second, we get 4,000 bits. What's the point here? Increasing the bandwidth increased our data rate. That's the key point of this example, but there are maybe some other things you'll notice. If you went up to, say, a bandwidth of 8,000 hertz, we can do it. That means the frequency must be what? 4,000 hertz, because the bandwidth is 2f, with components from 4,000 up to 12,000, giving a bandwidth of 8,000. The period would be a quarter of a millisecond, and the data rate 8,000 bits per second. Key point: increasing the bandwidth increases the data rate, given other factors are the same, and given we're using just two components. If we change the number of components, the data rate may change in a different manner. But with everything else the same: higher bandwidth, higher data rate. Now, some things you may also notice. The data rate equals the bandwidth, apart from the units: 8,000 and 8,000, 4,000 and 4,000. Is that always true? No, that only happens because of our particular encoding scheme. I said one bit per half period, or two bits per period. If we had a different scheme where it was one bit per period instead of two, we'd get a different data rate.
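The trend across the three bandwidths, with the encoding scheme held fixed, can be sketched as a small function (the function name is mine, not from the slides):

```python
def achievable_rate(bandwidth_hz, bits_per_period=2):
    # for this two-component signal, bandwidth = 2f, so f = bandwidth / 2
    f = bandwidth_hz / 2
    period = 1 / f
    return bits_per_period / period

for bw in (2000, 4000, 8000):
    print(bw, "Hz ->", achievable_rate(bw), "bits per second")
```

Doubling the bandwidth doubles the rate here only because everything else, including the encoding, is held constant.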
But the trend still applies: increasing bandwidth increases data rate. What about increasing frequency? It turns out here that increasing frequency also leads to an increased data rate, but we normally think of it from the bandwidth perspective. The bandwidth is what matters in practice; we usually need to work within the bandwidth we have available. We'd usually choose the frequency depending on the physical characteristics of where we want to transmit. If I want to transmit a signal that will go through walls, I would choose one frequency; if I want a signal that stays within a room or within a few centimetres, I may choose a different frequency. The bandwidth is usually chosen to minimise the range of frequencies we use, to reduce the cost. So that's the thing we care about: increased bandwidth, increased data rate. That's our first trade-off. It's listed through some of the slides, but today we'll list all these trade-offs in one place, so I'll write them down. You'll see this written down in a couple of slides, in the summary of the bandwidth and data rate section. And I'll say that's a good thing, in that we'd normally like to increase the data rate; we'd usually like to send as many bits as possible. So we think of it as an advantage of a communication system: to get a high data rate, use a higher bandwidth. Can everyone see the 'I' there? Increase; I'll move across a bit. Let's list the trade-offs we know already, and a few new ones. What else have we seen in the past? If we go back to one of the previous examples on this handout, we can look quickly. Remember this one? For C and D, when we had this analog signal sending data, we had two different cases. The signal shapes are about the same, with the same signal element duration of 0.01 seconds, and I think the signal frequency is the same.
If you look at the frequency of the sine waves, they're identical. What was the data rate for the first case? You've calculated it before; remind us of the answer. It's in the slides or the examples. What did you get? 100 bits per second, I think, was the data rate in this case. We had a particular mapping that we defined: for the signal element duration, if we transmit a wave that goes up first, that's bit 1, and down first is bit 0. That was our signal encoding technique, and we got 100 bits per second. Then in example D, what data rate did we get? 200 bits per second. Because even though we have the same signal element duration and the same signal frequency, what we did differently is use a different encoding technique. We said there are four different shapes: a phase of zero with a small amplitude, a phase of zero with a high amplitude, and then, I think, a phase of pi over two (such that it went down first) with a high amplitude, and a phase of pi over two with a low amplitude. We had four different signal elements to represent our data, and because we have four elements, each element can represent two bits at a time. So the result was that, compared to example C, we were sending two bits per signal element as opposed to one bit per signal element in the same signal element duration. We got a higher data rate, with all other factors being the same. How do we summarise that as a trade-off? The more signal elements used to represent our data, the higher the data rate. In the first case, we had two different signal elements and got 100 bits per second. In the second case, we had four different signal elements and got 200 bits per second, all other factors being the same. If I wanted 300 bits per second, what would I do? Keeping all the other factors the same, how many different signal elements would I use?
Eight. With eight signal elements, eight different values, we can represent a combination of three bits per value: 000, 001, through to 111. There are eight combinations when we have three bits. So if we had eight different signal elements, we'd have three bits per signal element, and therefore we'd get 300 bits per second: three bits per 0.01 seconds. More generally, instead of the number of signal elements, we often talk about the levels of the signal. So in the first case there are two levels. We often show the levels as different magnitudes, but they don't have to be. In general, we'd say the first case has two levels or two signal elements, the second has four levels or four signal elements, and if we wanted 300 bits per second, we'd use eight levels or eight signal elements. Increasing the number of levels, keeping everything else the same, increases our data rate. That's another key trade-off we should be aware of. So if we want a higher data rate, we can do two things: increase the bandwidth and/or increase the number of levels we use when we encode data onto the signal. Let's keep going. Any questions on that one? Let's look at some other trade-offs that we've seen, and some new ones. Remember, we started with a single sine wave, then we added two components together to get the red one. Then for signal three we added a third component to get this green one, and we kept going and ended up with four components, and then an example with 30 components, and we got almost a square wave. Let me find them; let's look at all four of those. Assuming all factors are the same, all we're doing here is increasing the number of components. Can anyone remember the spectrum of the red one? Two components, at frequencies of two and six hertz.
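The pattern here (2 levels gives 1 bit per element, 4 gives 2, 8 gives 3) is just log base 2 of the number of levels, so with the 0.01-second element duration from examples C and D we can check the rates. A quick sketch, assuming the element duration stays fixed:

```python
from math import log2

element_duration = 0.01   # seconds per signal element, as in the handout
for levels in (2, 4, 8):
    bits_per_element = log2(levels)
    rate = bits_per_element / element_duration
    print(levels, "levels ->", rate, "bits per second")
```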
The component frequencies were two and six hertz, giving us a bandwidth of the difference between two and six: four hertz. We've calculated that before. The green one was the first two components plus a third: two, six and 10 hertz, giving us a bandwidth of eight hertz. The fourth one's spectrum had components at two, six, 10 and 14 hertz, giving a 12 hertz bandwidth. And the bandwidth of the black one, which had 30 components, I calculated before; we won't do it now. Did anyone calculate it? I think some people did: 116 hertz. You can check that: if you write down all 30 components, you'll see they go from two up to 118 hertz. So we're increasing the bandwidth here, and we're using this scheme of approximating ones and zeros as high and low. Still the same scheme: one, zero, one, zero. They all represent the same sequence of four bits. So we can think of each signal as approximating a square wave: a high signal for one and then low for zero. Which of those four signals is the best approximation of a true square wave? The black one. It's closest visually to a true square wave, so we'd say it's the best approximation of what we really want, which was high, low, high, low. The red one is also high, low, high, low, but it's not quite as accurate, we'd say. So a conclusion we make here is that increasing the number of components, with all other factors being the same, in fact increases the bandwidth, and increasing the bandwidth creates a signal which is more accurate in representing our data. The black one, we'd say, is more accurate than the red one in approximating the true square wave. Increasing bandwidth increases accuracy. Why do we care about accuracy? We know we care about data rate, how many bits per second; what is accuracy relevant for? It's relevant if things go wrong.
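The bandwidths just quoted follow from the components sitting at odd multiples of the 2 hertz fundamental (f, 3f, 5f, and so on). A quick check in Python; the list comprehension is mine:

```python
f = 2  # fundamental frequency, Hz
for n_components in (2, 3, 4, 30):
    # components at the odd harmonics: f, 3f, 5f, ...
    freqs = [f * (2 * k - 1) for k in range(1, n_components + 1)]
    print(n_components, "components:", freqs[0], "to", freqs[-1],
          "Hz, bandwidth", freqs[-1] - freqs[0], "Hz")
```

For 30 components this gives frequencies from 2 up to 118 hertz, hence the 116 hertz bandwidth quoted above.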
If we have some noise, some impairments, the goal of the receiver is, given the received signal, to map it back to the original data. The more accurate the signal, the less chance of making a mistake in mapping it back to the original data, and so the less chance of errors. If you have a less accurate signal, like the red one, with some noise added in, there's more chance that the receiver will make a mistake, think for example that one level is the opposite, and we'll get the wrong bits. So we say increasing the bandwidth increases the signal accuracy, which reduces the errors; it decreases the chance of errors. That's why we care about accuracy, especially when we have impairments. So these are some things we can do to improve our communications: increase the data rate or decrease the errors. We've seen these trade-offs; let's keep adding a few more. What's a bad thing about increasing the bandwidth? Cost: it increases the cost. What's a bad thing about increasing the number of levels? Going from two levels to four levels, we got a higher data rate. What's a disadvantage of using more levels? More complexity at the receiver; maybe it's harder to implement. That's true. What else? Both the transmitter and receiver must be built such that they can generate and interpret all of those levels. The more levels, the harder to implement, and the complexity means the algorithms to do it may take longer. So yes, correct, it may increase the complexity, but I'm not going to write that down, because there's maybe a more significant thing when it comes to our data communications. I don't care too much about implementing it; you're all good programmers, you can implement things in an efficient manner. So let's ignore that for now. But something else: more levels decreases the accuracy, or more importantly, increases the errors. Why? I don't think we have a good picture to illustrate it.
But with more levels, the receiver must correctly interpret what it receives as being one of those levels. In this one, we went from two levels to four levels. The slide doesn't show it very well, but the idea is that with more levels, when we add in some noise, some impairments, there's less difference between each level, and with less difference between each level, there's less margin before we make a mistake. Maybe I can illustrate that with a very quick drawing. Let's say our levels are just in terms of magnitude, and let's consider a digital signal. Here's the first signal, two levels. Let's say this represents one, zero, zero and one. So we see the mapping of the high level to 1 and the low level to 0, with a digital signal in this case. Now let's use four levels, and let's first define the mapping. With four levels, we're going to have something like very high, high, low and very low, as opposed to just high and low, or very high and very low, in the two-level case. We would actually define the absolute magnitudes of those; here, high was one and low was zero. So we could have a mapping like this: very high is 11, high is 10, low is 01 and very low is 00. Now let's generate the same sequence of bits. We have one, zero, so we have a high signal; then zero, one, which is a low signal. So here are two examples: two levels versus four levels. And let's add some more data, just to bring more levels into the example. The next two bits are one and one, and in our second scheme that maps to very high: 11. First point: the more levels, the higher the data rate. Within the same period of time, we can transmit more bits; we've already said that. More levels, higher data rate. The point we're trying to make now is that the more levels, the higher the chance of errors. Why? Because what the receiver has to distinguish in the second case is between the four different levels.
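The four-level mapping just described can be written down directly. The dictionary and the bit string below are illustrative, matching the board example (10, then 01, then 11):

```python
# hypothetical four-level mapping from the board: 2 bits per signal element
mapping = {'11': 'very high', '10': 'high', '01': 'low', '00': 'very low'}

bits = '100111'   # the sequence from the example
elements = [mapping[bits[i:i+2]] for i in range(0, len(bits), 2)]
print(elements)   # ['high', 'low', 'very high']
```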
So it receives a signal which is around here. Which level does it correspond to, low or very low? There's only a small difference between the two when we have four different levels, so there's more chance of making a mistake. Whereas with two levels, if it receives a signal which is around here, then it's highly likely to be the low signal, a zero; it's very unlikely to be a one. So when we take the same space and split it into more levels, there's more chance that we'll make a mistake. More levels, higher data rate, but more levels increase the chance of errors: on average, we'd have more errors under the same conditions. We've got a few more to add, but let's return to the slides and talk about impairments. The thing we finished on last lecture was transmission impairments. We said there are different types of impairments, but the two we're focusing on are attenuation and noise, and we'll go to the summary picture that captures both of them. The transmitter transmits a signal, the black one. As that signal propagates across some distance it attenuates, it gets weaker. So the blue one is the signal after it's been attenuated. How much it attenuates depends on the distance, but it's going to be weaker, so a smaller amplitude. If there were no noise, the receiver would receive the blue signal. But the other significant impairment is noise. Noise is everything else the receiver picks up in addition to the signal. So the receiver will receive the blue attenuated signal plus the red noise. And noise varies; there are different sources, as we mentioned. It's often seen as random variations because of the background noise, plus, in some cases, an impulse, as I've drawn here: maybe some short electrical disturbance leads to a peak. So there's some noise in the system. What the receiver gets is the summation of the noise and the attenuated signal, which is the purple one.
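Why more levels mean less margin can be seen by spreading the levels over the same amplitude range: the receiver picks the nearest level, so it takes noise of only half the level spacing to cause a mistake. A sketch with made-up numbers:

```python
def level_spacing(m, v_min=0.0, v_max=1.0):
    # m levels spread evenly across the same amplitude range
    return (v_max - v_min) / (m - 1)

# noise bigger than half the spacing can push a sample to the wrong level
for m in (2, 4, 8):
    print(m, "levels: spacing", round(level_spacing(m), 3),
          "-> error margin", round(level_spacing(m) / 2, 3))
```

Doubling the number of levels roughly halves the spacing, which is the trade-off: more bits per element, but a smaller margin against noise.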
The receiver takes that received signal and maps it back to the data, assuming 1, 0, 1, 0 was transmitted. The receiver measures the signal for each signal element duration. It measures here: high most of the time, must be a 1. Good. It measures here, and you have to zoom in to see, is it high or low? For this part it's low, but for this part, because of that noise, it jumps to high. So it's possible that the receiver measures that and sees the average value as high: a bit 1 received. Then here it's high again, bit 1 received, then low, bit 0 received. 1, 1, 1, 0. Now there's a bit error. The transmitter sent 1, 0, 1, 0; that noise may shift the second bit from a 0 to a 1, causing a bit error. So that's the problem of noise and attenuation. Of course, the more noise, the more chance of bit errors; if the noise is larger, it can flip bits from low to high and vice versa. So there's a new trade-off: increased noise, increased errors. That's a bad thing, since we'd like to have fewer errors, so increasing noise is a disadvantage. What else? What leads to more noise? Well, there are different sources of noise. There's background noise, or thermal noise; there's noise due to other transmitters; there's noise due to lightning strikes or other electrical disturbances in the system. So they're different sources. For the background or thermal noise, generally the larger the bandwidth we use, the more noise is introduced. So this adds some more confusion among the trade-offs. With a larger bandwidth, we usually get more noise. So see this: increasing the bandwidth increases the noise, but we've just seen increasing the noise increases the errors, which implies increasing the bandwidth increases the errors. But we also said increasing the bandwidth decreases the errors. That's confusing. Well, we haven't said by how much; we haven't said whether it increases the errors by 10% or doubles them.
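A toy version of that decision process, with a noise impulse landing on the second signal element. All the numbers here are invented for illustration:

```python
sent = [1, 0, 1, 0]
attenuated = [0.5 if b else 0.0 for b in sent]   # weaker copy of the signal
noise = [0.0, 0.6, 0.0, 0.0]                     # impulse during the second element
received = [sig + n for sig, n in zip(attenuated, noise)]

threshold = 0.25                                 # decide high vs low
decoded = [1 if v > threshold else 0 for v in received]
print(decoded)   # [1, 1, 1, 0] -- the second bit has flipped from 0 to 1
```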
In fact, to compare all of these trade-offs, we should really look at by how much each one increases or decreases the errors: yes, increasing the bandwidth decreases the errors from the signal perspective, but increasing the bandwidth also allows in more noise, which can increase errors. To know whether we get a total increase or a total decrease in errors, we need to know by how much each factor changes them. We're not going to look at that in this topic, but people who design and select communication systems need to analyse it in depth: we want high bandwidth for fewer errors because of the signal, since we can have more components in the signal, but the more bandwidth we add, the more noise comes in, so we need to make sure we don't allow in so much noise that the extra errors overcome the gain we get from the better signal. The decrease in errors was due to the signal having more components, the higher signal accuracy; the increase in errors is due to noise. Two different factors, so there's a trade-off there. How do we overcome noise? It's very quiet now, there are not so many students here, but this morning in the lecture there were many students talking. How do I overcome that noise, assuming I can't stop the students from talking? Not by changing my frequency. Talk louder. I could change my volume, my signal strength: talk louder or turn up the amplifier to overcome that noise. So that's another factor: if we transmit with a higher signal strength with respect to the noise, we can increase the amount of data we receive, or decrease the amount of errors that occur due to noise. So that's a good thing, a way we can improve performance: increase the signal strength or power that the receiver receives. We will see an equation shortly showing that it can increase the data rate.
Will I fit it in? You will see. Increasing the signal strength can effectively increase the data rate. Note that errors and data rate are related. If I have a data rate of 1,000 bits per second and an error rate of 10%, I transmit 1,000 bits per second, but 10% of the bits received are in error: 100 bits are received in error, so effectively I've only really transmitted 900 bits. So given a particular data rate, say 1,000 bits per second, if 100 of those 1,000 bits received are wrong, 900 are correct, and my effective data rate is really 900 bits per second. So in fact, when we decrease the errors, we're really increasing the data rate as well. We transmit with a higher signal strength to overcome the effects of noise. What's the problem with increasing my signal strength? I turn up my microphone; what happens? Different things may happen. There may be some distortion if I turn it up too high, but we haven't talked about distortion. There's a class next door: if I turn up my microphone, they start to hear my signal and I create noise for them. I'm interfering. We say this is interference with some other entities communicating; crosstalk or interference is what it's called. So increasing your signal strength can also increase noise, usually noise on others, which increases their errors. More noise, more errors; increased signal, fewer errors. So we need to be able to measure those values, and by how much, to determine in which cases we should use a high signal and in which cases a high bandwidth. So increasing the signal helps us but impacts someone else, and if we consider the entire communication system, that may have a negative impact in total. That was shown on this slide: co-channel interference, or interference in general, or crosstalk. Two transmitters want to send to two separate receivers: TX1 to RX1, and TX2 sending the green signal to RX2, but they interfere with each other.
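The effective-data-rate arithmetic from the example above is one line:

```python
data_rate = 1000          # bits per second transmitted
error_rate = 0.10         # 10% of received bits are in error
effective_rate = data_rate * (1 - error_rate)
print(effective_rate)     # about 900 correct bits per second
```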
So RX1 receives the blue signal from TX1, but it also receives interference from TX2, which is in fact the green signal. They add together, and given attenuation, this is what RX1 receives; you can see it's not a good representation of what TX1 intended it to receive. The interference will result in errors at the receiver: it would not know what bits were transmitted. And it's similar for RX2. So this is what we call interference in wireless systems; in wired systems it's usually referred to as crosstalk. It can happen between two different wires: they can interfere with each other as the energy dissipates out of one wire and onto others. I think we've got most of the trade-offs there, but you can see that they conflict in some cases. It's not as simple as 'let's just increase the bandwidth'; we must consider several factors. It's complex. Any questions on those trade-offs before we move on to the last part of this data transmission topic? You may have a look at this example; we won't go through it now. It's just an example where we transmit some data as a particular signal and then add in noise, and this is what we receive. The receiver takes this received signal and maps it back by sampling: is the signal low? And be careful: in this case a low signal represents bit 1, the opposite of what we've used before, and a high signal represents bit 0. The receiver samples: the signal here is low, so I've received a bit 1. That's correct, because bit 1 was sent, and it keeps going. At this point it samples and measures the signal strength to be positive, high. High means bit 0 received. That's a bit error, because in fact bit 1 was transmitted. So this is just showing the impact of noise: it leads to bit errors. These two slides we're going to skip over; we may return to them in the next topic on transmission media.
They just introduce some notation about transmit power and a few other factors, but we'll introduce those when necessary; they're not needed at the moment. To finish this topic: the trade-offs we've listed don't indicate by how much we increase the data rate, or by how much we increase or decrease the number of errors. We would like some equations that tell us by how much. Well, it turns out that people have developed some equations, two famous and rather simple equations, that tell us the relationship between data rate, bandwidth and some other factors. That's what we talk about as channel capacity. What do we mean by channel capacity? By capacity we mean the maximum data rate we can achieve, in the same way that the capacity of this lecture room is 60 students while the current number of students is 12: in both cases we're counting the number of students, and capacity is the maximum we can fit in. In data communications, data rate measures bits per second, and the capacity is the maximum number of bits per second we can send over some communication system. The channel is the thing characterised by a particular bandwidth, and usually a particular frequency; a link is a simpler way to think of a communication channel. We've seen the trade-offs; they're complex. Let's look at two equations that combine some of them together. They're called the Nyquist capacity equation and the Shannon capacity equation, and we'll go through each of them with some examples. Nyquist analysed communication systems and came up with the equation in the centre of the slide: C = 2B log base 2 of M. His analysis assumes that there is no noise: assume the noise is so low, so insignificant, that we can say it's zero. Now, there is never no noise; there's always some noise. But to make the analysis simple, he assumes there is none.
In such a case, he determined that the capacity C, measured in bits per second, is equal to 2 times the bandwidth B, measured in hertz, times log base 2 of M, where M is the number of levels in the signal that we use. Remember the equation and know how to use it; we'll see how with a simple example. Everyone remember dial-up modems? No? Can anyone remember dial-up modems, the ones that made the strange sound when you connected? Anyone seen one in a movie or something? No? Everyone's getting a little bit too young. Before ADSL modems we had dial-up modems. Basically, similar to an ADSL modem, you connect your telephone line into it, the telephone line comes in from the wall, and you connect your computer to the modem; sometimes the modem was built into the computer. The point is that the modem takes the data from your computer and sends it as an analog signal across the telephone line, across the telephone system, to your ISP, your internet service provider. And the telephone system was designed to support a bandwidth of 3,100 hertz, so the signal transmitted across the telephone line had a bandwidth of 3,100 hertz. Why would a telephone line support a bandwidth of 3,100 hertz? Anyone want to guess? Telephones have been around for 100 years or so; what have they been used for, mainly? What data is communicated with telephones? Voice. The typical bandwidth of a human voice is about 3,000 hertz: when we speak, it ranges from hundreds of hertz up to about 3 or 4 kilohertz. So the telephone system was designed to carry only that range of frequencies. But now we want to connect our computer and send bits, not just voice. We have a bandwidth of 3,100 hertz; what's the maximum data rate we can achieve with our dial-up modem? Let's assume our basic signal scheme of high for bit 1, low for bit 0, and find the data rate. The hint: use the Nyquist capacity equation.
I shouldn't have labelled them A and B. A is your modem; B in this picture is, say, the modem of the ISP, your internet service provider. B here, in the Nyquist equation, is bandwidth; it's not location B. We have a link between the modem and the internet service provider, and that link allows us to send signals with a bandwidth of 3,100 hertz. Let's assume we use a signalling technique that gives us just two levels: high for bit one, low for bit zero, for example, or the opposite. What's the maximum data rate we can achieve using this modem? Well, the Nyquist capacity equation. Just plug in the values. The capacity, the maximum data rate, is two times the bandwidth times log base two of the number of levels in our signalling or coding technique. Two times 3,100 is 6,200. Log base two of two: two to the power of what equals two? One. So it just becomes 6,200 bits per second. Bandwidth is measured in cycles per second, or hertz; here we get bits per second, capacity or data rate. So that's just an application of the Nyquist capacity equation. It says that if we have a channel with a bandwidth of 3,100 hertz, which is your typical telephone system, you can't send more than 6.2 kilobits per second if you use this signalling coding technique. You can't go higher; there's no way under these conditions. And this assumes there's no noise: if there was noise, you wouldn't even get this level, it'd be less. No one remembers the dial-up modems, so no one's going to remember the highest speed they got. If you go back in history and find one of the most recent dial-up modems, they got to speeds of about 56 kilobits per second. When you bought one, it said 56 kilobits per second using a telephone line. But we just said that our capacity is 6.2 kilobits per second. How did those modems get a speed of 56 kilobits per second? What did they do? We said that we got a...
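As a quick check, the Nyquist calculation above can be reproduced in a few lines of Python (a sketch; the function name is my own):

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    # Nyquist capacity of a noiseless channel: C = 2 * B * log2(M)
    return 2 * bandwidth_hz * math.log2(levels)

# Telephone channel, binary signalling: high for bit 1, low for bit 0 (M = 2)
print(nyquist_capacity(3100, 2))  # 6200.0 bits per second
```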
The maximum is 6,200 bits per second, but I'm telling you the real modems could get up to about 56 kilobits per second. So, not frequency; look at the Nyquist equation. Use a higher or lower... what? Bandwidth we can't change: bandwidth is a characteristic of that telephone line. If we change the medium and use optical fibre, maybe we can change the bandwidth, but if we want to use our telephone network with the old modems, we're stuck with 3,100 hertz. So we change what? M, the number of levels. Find the number of levels necessary to get 56 kilobits per second. So if C is 56 kilobits per second, find M. How many levels did that modem use? Assuming no noise, how many levels would be needed? Well, we know C, we know the bandwidth, we don't know M, and you'll need your calculator. To find M, let's first find log base 2 of M. We've got 56,000 equals 2 times 3,100 times log base 2 of M. Let's rearrange: 56,000 divided by 2 divided by 3,100 gives us about 9. Let's approximate to 9; it's very close. I think it wasn't exactly 56 kilobits per second, it was maybe 55 point something, so let's say that gives us 9. Rearranging, log base 2 of M is 9, and remembering your logarithms and exponentials, M equals 2 to the power of 9, which is 512. So those dial-up modems that supported about 56 kilobits per second would transmit a signal using one of 512 possible levels. Each signal element they transmit represents 9 bits at a time, and with the same bandwidth we get a speed up to our 56 kilobits per second. How can we increase further? With the dial-up modems we're getting 56 kilobits per second, and it turns out that increasing that using the same bandwidth is very hard. Let's move up: here we have 9 bits per level, or 512 levels, so the next step up would be 10 bits per level. How many levels is that?
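The rearrangement can be checked numerically; this sketch follows the same steps, rounding to 9 bits per level as in the lecture:

```python
C = 56_000   # target data rate of the late dial-up modems, bits per second
B = 3_100    # telephone channel bandwidth, hertz

# Rearranged Nyquist equation: log2(M) = C / (2 * B)
bits_per_level = C / (2 * B)      # about 9.03, approximate to 9
M = 2 ** round(bits_per_level)
print(M)  # 512 levels
```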
If we try to increase the capacity and go up to M equal to 1,024 to get 10 bits per level, then log base 2 of M becomes 10, which gives us 2 times 3,100 times 10, that is 62 kilobits per second. So increasing the capabilities of our modem, getting a new modem which handles double the number of levels, which is more complex and has more chance of errors, gives us an increase from 56K up to only 62K. And at that point in time another 6 kilobits per second was insignificant: to get that extra 6 kilobits per second you had to go to a lot of effort. So instead, people came up with a new technology, of which ADSL is one of the variants that we use. ADSL doesn't necessarily increase the levels. ADSL transmits a signal which uses not just the voice frequencies of the telephone system but a range of other frequencies that can also be sent across the copper wires. So yes, the telephone system was designed to carry voice, about 3 kilohertz, but the copper cabling inside your telephone network can carry a much wider range of frequencies, up to one or two megahertz. That's what ADSL does: it transmits a signal using frequencies up to about one megahertz. Increasing B to one megahertz, you can see you get a big jump in the data rate. So the new technology used different types of signals to utilise a higher bandwidth. In summary, the Nyquist equation relates data rate to the bandwidth and the number of levels in the signal scheme we use. It assumes there's no noise. In reality there is some noise, and that's effectively going to decrease the data rate below the capacity we calculate. We said it's 56 kilobits per second, but with noise it's going to be less in practice, though maybe only slightly less.
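To see why that next step up was considered not worth the effort, we can compare the two level counts directly (a sketch of the same Nyquist arithmetic):

```python
import math

B = 3_100  # telephone channel bandwidth, hertz
for m in (512, 1024):
    c = 2 * B * math.log2(m)  # Nyquist capacity for m levels
    print(m, c)
# 512 levels  -> 55800.0 bits/s (about 56 kbit/s)
# 1024 levels -> 62000.0 bits/s: doubling the levels buys only 6.2 kbit/s more
```

Capacity grows with the logarithm of M, so each extra bit per level costs a doubling of the levels; raising B instead, as ADSL does, scales the capacity linearly.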
So to take noise into account, Claude Shannon did his own analysis and came up with another relationship: maximum data rate C and bandwidth B again, but instead of looking at the number of signal levels, it focuses on the signal and noise power, especially the amount of noise, denoted SNR, the signal to noise ratio. The equation, shown on the slide, is C = B log2(1 + SNR). The signal to noise ratio, like the name suggests, is the ratio between the strength of the signal received and the strength of the noise received. We said increasing the signal can increase the data rate; that was one of the trade-offs we noted. The stronger the signal you receive relative to the noise: when the signal power received goes up, for example because I turn up the volume, then the signal to noise ratio goes up. If the noise is the same but the signal goes up, the ratio goes up, and as the signal to noise ratio goes up, the capacity goes up. Higher signal power, with everything else the same, means higher capacity. And the other trade-off we saw: increasing noise increases errors, and increasing errors decreases data rate. More people start talking in the background, more noise: if the noise you hear goes up, the denominator goes up, so the ratio goes down, and if the ratio goes down, the capacity goes down. Higher signal, higher capacity; more noise, lower capacity. That's what the equation shows. So let's see this equation being used in our last example, and we'll modify the example a little. We have a channel that uses a spectrum between 3 MHz and 4 MHz. What does that tell you? The bandwidth is 1 MHz: the spectrum goes from 3 up to 4, and bandwidth is the difference, so the bandwidth is 1 MHz.
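The direction of both trade-offs can be seen by evaluating the Shannon equation for a few SNR values (a sketch; SNR here is the plain power ratio, not decibels, and the function name is my own):

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    # Shannon capacity: C = B * log2(1 + SNR), SNR as an absolute ratio
    return bandwidth_hz * math.log2(1 + snr)

B = 1_000_000  # 1 MHz channel
for snr in (63, 127, 255):
    print(snr, shannon_capacity(B, snr))
# SNR  63 -> 6,000,000 bits/s
# SNR 127 -> 7,000,000 bits/s   (stronger signal relative to noise: capacity up;
# SNR 255 -> 8,000,000 bits/s    more noise, i.e. lower SNR: capacity down)
```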
With a signal to noise ratio... and here's where we'll change the question. In this question I used decibels, dB, but we haven't talked about them yet, so delete the dBs for now and set the value to be 251. In a later topic we'll explain how to convert from an absolute value to a decibel value; 24 dB is equivalent to a ratio of about 251, and we'll see that later. So change that, and now the question is: how many signal levels are required to achieve the Shannon capacity? There are two steps. "How many signal levels" hints that we need to use the Nyquist capacity equation somewhere, because Nyquist includes M in the equation. But to achieve the Shannon capacity we must first use the Shannon capacity equation. So let's first find the Shannon capacity of such a channel: bandwidth 1 MHz, SNR 251. So X sending to Y: in this case we have a link, or communication channel, with a bandwidth of 1 MHz and a signal to noise ratio of 251. We'll plug the values into the equation in a moment, but let's first explain what this SNR of 251 means. Quite simply, it means that the signal received, if we measure its strength, is 251 times greater than the noise received. So X transmits a signal and Y receives it. Let's say we measured the power levels, and the received signal, just as an example, is measured at a strength of 502 milliwatts. So the computer Y receives the signal and takes some measurements of the signal strength; signal strength can be measured in volts or in watts, say milliwatts, and Y measures the received signal to be 502 milliwatts. It also measures the received noise: at a different time it measures how much noise it receives. The actual signal, the data that X sent, has a strength of 502 milliwatts, but Y also receives other signals, which are the noise. What's the strength of the noise? 2 milliwatts, given
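Since the SNR is a pure ratio, any consistent pair of power units gives the same number, as this tiny check shows:

```python
# SNR = received signal power / received noise power (dimensionless)
snr_milliwatts = 502 / 2   # 502 mW signal, 2 mW noise
snr_microwatts = 251 / 1   # 251 uW signal, 1 uW noise
print(snr_milliwatts, snr_microwatts)  # 251.0 251.0
```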
the signal to noise ratio is 251, it means the signal must be 251 times larger than the noise, so the noise must be 2 milliwatts in this case. Note that the signal to noise ratio is a ratio: it's dimensionless, there are no units, it's simply 251. That's just an example of what the SNR means. If the signal received were 251 microwatts, then the noise would be 1 microwatt and we'd have the same signal to noise ratio; the Shannon capacity equation only cares about the ratio between those values, not the absolute values. Now let's use the equation. The Shannon capacity will be: B, which we know, is 1 megahertz, 1 million or 1 times 10 to the power of 6, times log base 2 of 1 plus 251. All I did was replace B with 1 million and SNR with 251. Log base 2 of 252 is about... remember your powers of 2: 2 to the power of 8 is 256, and 252 is very similar, so log base 2 of 252 is about 8, slightly less. So we get about 8 times 1 million, approximately 8 megabits per second. Note that the bandwidth is in hertz, megahertz in this case, while the capacity or data rate is in bits per second, megabits per second. The question was: how many levels do we need to achieve the Shannon capacity? What is the Shannon capacity? 8 megabits per second. How many levels do we need to achieve that? Use the Nyquist capacity equation. We want to know M; we already know the bandwidth of the channel, still 1 megahertz, and we also know the capacity, 8 megabits per second. So 8 megabits, 8 million, equals 2 times the bandwidth, which is 1 million, so 2 million, times log base 2 of M. Log base 2 of M equals 8 divided by 2, which is 4, so M equals 2 to the power of 4, which is 16. This is just saying that for that particular channel, where we knew the signal to noise ratio, we knew we could achieve up to 8 megabits per second; to do that we must use a signal that encodes with at least 16 levels. If we use fewer, say 8 levels, there's no way we can get 8 megabits per second, even with no noise. So this example just combines the two equations.
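The full two-step example can be sketched end to end, using the same rounding to 8 Mbit/s as in the lecture:

```python
import math

B = 1_000_000   # 1 MHz channel (spectrum from 3 MHz to 4 MHz)
SNR = 251       # absolute ratio (equivalent to about 24 dB)

# Step 1: Shannon capacity of the channel
c_shannon = B * math.log2(1 + SNR)   # ~7.98 Mbit/s, call it 8 Mbit/s
C = 8_000_000

# Step 2: Nyquist equation, rearranged for M, to reach that capacity
bits_per_level = C / (2 * B)   # 8 / 2 = 4
M = 2 ** int(bits_per_level)
print(M)  # 16 levels
```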
And that's a good place to finish. We've covered the trade-offs amongst the different characteristics of signals, and we've finished with two specific cases of those trade-offs, relating data rate to the other characteristics using the Shannon and Nyquist capacity equations.