So, the first approach we'll look at is getting a shared secret between our different entities using symmetric key encryption. By automatic exchange of keys I mean sending a key across the network, and if we do that, the key must be encrypted; we can't send an unencrypted key across the network. So how do we encrypt it? In this first approach we'll send secret keys across the network encrypted using other secret keys. Our aim is that two entities share the same secret key, and we build on the principle that we can't use that same key forever. That is, if I do exchange a key, I don't want to keep using it; I'd like to change that key some time later. So we need a mechanism that supports changing keys without much manual intervention, because we want to change keys frequently. We'll see two approaches over the next few slides. First, decentralized key distribution, where we manually exchange master keys: a human goes and programs in the master keys, does that once, and then when we want to encrypt data, we don't use the master key but generate a second key, a session key, and exchange that session key automatically. That is, we send the session key across the network, and since anything sent across the network must be encrypted, we encrypt it using the master key. So there is manual exchange or distribution of master keys and then automatic distribution of session keys. When we need to change keys, we change the session key, and that's not such an overhead because it's done automatically; the human doesn't have to change them. A good thing about this approach is that it's decentralized: it doesn't rely on special servers or third parties. But it has some problems we'll come across, and we'll see a more common approach is using a key distribution centre.
We introduce a third party, another special server in the network, that supports the key distribution. We'll see that there's a manual exchange of master keys with the key distribution centre, the KDC, and then again automatic distribution of session keys. What about master and session keys, what's the idea there? We have layers, a hierarchy of keys; commonly we don't have just a single key. We use the master key to encrypt the session keys, so that when we exchange those session keys an interceptor cannot see them. Then we use the session keys to encrypt the data, so that when we send the data across the network the attacker cannot see it. The idea of having these two levels of keys is that we use the master keys for a long time: we don't change them much, and they may be manually distributed. So we go to some effort to distribute the master keys but then keep them for a long time, whereas the session keys, which are used to encrypt our data, we automatically distribute and change on a regular basis. If we're going to change keys, how often to change them is a design issue. Generally we assume that the more often we change keys, the shorter the lifetime of each key, the more secure our system is. Say I use a key to encrypt one packet and send it, then encrypt the next packet with a different key and send that. If for some reason the attacker can find one of my keys, they can decrypt one packet, but they cannot decrypt the other packets. So the idea of changing keys is that if something goes wrong, some of our data may be compromised, but other data may still be secure because we used different keys. A shorter lifetime for a key is more secure, but the process of changing keys involves some exchanges and some overhead, so the shorter the lifetime, the larger the communication overhead for doing key changes. There's a trade-off there. What are the best values? There's no one answer.
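The two-level hierarchy described above can be sketched in a few lines of Python. The "cipher" here is a toy XOR stream built from SHA-256, chosen only so the example is self-contained; a real system would use a proper cipher such as AES. The key point is the layering: the long-lived master key encrypts only the session key, and the session key encrypts the bulk data.

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream.
    Illustration only -- not a real cipher."""
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Master key: distributed manually, long-lived, rarely used.
km = secrets.token_bytes(32)

# Session key: generated fresh, sent across the network protected by km.
ks = secrets.token_bytes(32)
wrapped_ks = toy_encrypt(km, ks)   # only holders of km can recover ks

# Data is encrypted under the session key, never the master key.
data = b"millions of bytes of application data..."
ciphertext = toy_encrypt(ks, data)

assert toy_decrypt(km, wrapped_ks) == ks
assert toy_decrypt(ks, ciphertext) == data
```

Because the master key only ever protects short, random session keys, an attacker sees very little traffic encrypted under it, which is part of why it can safely live longer.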
Sometimes it depends upon the application or the communication patterns. Some examples: if you're using something like TCP, you set up a connection before you exchange data and close the connection at the end. To access a website, you set up a TCP connection, request the web page, download it, and then close the connection. With such connection-oriented communication protocols, maybe we use one key for each connection; the next time we set up a connection, we use a different key. Other protocols may change keys after some fixed period of time, or after a certain number of packets have been sent: after 10 minutes, change the key; after a million packets have been sent, change the key. So how often we change keys depends upon the pattern of communications. Now we're going to go through some protocols for automatically exchanging keys, and you'll see this notation in the next few diagrams. We'll talk about the entities communicating, the end systems, like user A and B or computer A and B, and we'll denote each entity by a unique ID, IDA and IDB. What is the ID? Maybe it's the IP address of the computer, or an IP address and port number, or maybe the username of a user, your email address, or something else that uniquely identifies you in the network. It depends upon the application and the network setup, but we're assuming the ID is something that uniquely identifies that entity in the network. We'll have master keys, and I'll sometimes use slightly different notation for them: in one of the diagrams you'll see KM as a master key exchanged between A and B, and KA and KB, I think, in the second diagram. We'll have session keys like KS, and we'll introduce a nonce value. Let's see the first approach, decentralized key distribution. The aim here is that a pair of entities, A and B, want to exchange a session key.
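The "change after 10 minutes or a million packets" policy mentioned above can be captured in a small helper. This is a minimal sketch; the class name and thresholds are made up for illustration, not recommended values.

```python
import time

class RekeyPolicy:
    """Decide when to change the session key: after a fixed number of
    packets or a fixed amount of time, whichever comes first.
    Thresholds are illustrative only."""

    def __init__(self, max_packets: int = 1_000_000, max_seconds: float = 600):
        self.max_packets = max_packets
        self.max_seconds = max_seconds
        self.reset()

    def reset(self) -> None:
        """Start counting again, e.g. right after a new key is installed."""
        self.packets = 0
        self.started = time.monotonic()

    def record_packet(self) -> bool:
        """Count one sent packet; return True when it's time to rekey."""
        self.packets += 1
        expired = (time.monotonic() - self.started) >= self.max_seconds
        return expired or self.packets >= self.max_packets

# Tiny limits so the trigger is visible immediately:
policy = RekeyPolicy(max_packets=3, max_seconds=600)
print([policy.record_packet() for _ in range(3)])  # [False, False, True]
```

Connection-oriented protocols get the same effect for free: generating a fresh key per TCP connection is just a rekey policy whose trigger is "connection closed".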
So they need a secret, both sides need to know it, and we want to establish it automatically; the steps to do that are illustrated in this diagram. The way to read the diagram: there are three messages sent between A and B, labeled one, two and three, and the contents of those messages are listed. Remember that the double bars mean concatenation, so message one contains two pieces of information, the identity of A and the value N1. Let's go through and see how this works, and then talk about the advantages and disadvantages of this approach. The assumption before this exchange takes place is that both A and B have a master key. So before any keys are exchanged, the known information is: A knows KM and B also knows KM. That's what's known at the start of this exchange, KM at both sides, and of course everyone knows their own identity. We both know a master key; our aim is for both A and B to learn a session key, KS. Now this is a little bit confusing, in that we already know a shared secret key: A and B share KM. Why exchange another shared secret? Well, this is the difference between master keys and session keys. The master key is going to be used to encrypt only a session key; it's not used very often. But when we have data to be exchanged between A and B, let's say millions of bytes, all of that data will be encrypted with session keys, maybe one session key, or maybe after some time we'll change the session key. So even though we already have a shared secret, KM, we're going to exchange another shared secret. When we say KM was known at the start, the assumption in this case is that the master key was manually exchanged. Then we go through this protocol, which is done automatically; it's implemented by software at A and B. The steps are: user A sends a message to B saying, I am user A, my identity is A.
The meaning is: I want to exchange a session key with you. And it attaches a second value, N1. N1 is called a nonce value. Back to the slides, what a nonce is: a number that's used only once. Nonce is short for number used only once, and we'll see it come up in many of these protocols. What's an example? A timestamp is commonly a number used only once. We take the current time on our computer clock and represent it as a number; the next time we do something, it'll be a different number, and the next time, and so on. So we can think of that as a number that's only used once. Or a counter: the first time we send a message, we use the value 1; the next time we send 2, then 3 and 4. Or even a random value: the first time I send a message, I choose a random number; the second time I choose another random number, and it should be different. Is that true? Are they numbers that are only used once? If you generate a timestamp, is it only used once? When we send it in a packet, can a timestamp be repeated? I send a packet now with one timestamp; I send a packet at some later time, and it has the same timestamp. Is that possible? Why? If the generation of the packets happens at the exact same time, yes. So one issue is that we need a clock that is granular enough, that goes down to as small a timeframe as possible, such that each event we do happens at a different time: nanosecond accuracy, I don't know, or maybe microsecond accuracy. So we need a clock to generate timestamps; let's assume we have such a clock. When we send a packet with one timestamp now and then send a packet with another timestamp later, can they have the same timestamp? Yes or no? Hands up for no. Hands up for yes. They can have the same timestamp. Why can they have the same timestamp? Ah, okay. We'll pass it on.
Right, well, I'm not sure if that was quite right there. First you need to consider what is sent across the network: we're talking about a packet which contains this value. Now, in theory, a timestamp is always changing, so two different times will have two different timestamps. But in practice, when we encode that timestamp and send it inside a packet, it's usually a fixed length. That is, maybe I represent that timestamp as a 10-bit number, because I don't want to include an arbitrary-size value in the packet, as it would get larger and larger. So in practice it may be possible to repeat, because we wrap around: with a 10-bit number, there are only 2 to the power of 10 possible values. The same applies to a counter. The very first packet I send has counter value zero, the next packet one, the next one two, and so on; we think the counter is never the same. But if we send that counter inside a packet, after sending billions and billions of packets we may have to wrap around back to zero, because the amount of space we want to use up in the packet is limited. So we'll assume they only happen once: in theory they are unique, while in most implementations they may be repeated, but just not very often. That's why we say a number used only once. Even a random value: if the random value is chosen between zero and one, there are only two possible values, and we will use the same number again. But if the random number is a 256-bit value, the chance of choosing the same random value again within a reasonable amount of time is very low, so we consider it a number used only once. Why do we use it? Commonly to stop replay attacks. I send a packet and include a number used only once in that packet. If I want to do a similar thing later, then the nonce value will be different, and the receiver will be able to distinguish that the second instance of the packet is different.
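The three nonce sources just discussed, and the wrap-around caveat, can be shown directly. This is a sketch using Python's standard library; the function names are made up for the example.

```python
import itertools
import secrets
import time

# 1. Timestamp -- unique if the clock is fine-grained enough, but once
#    truncated to a fixed-width field it can wrap around and repeat.
def timestamp_nonce(bits: int = 32) -> int:
    return time.time_ns() % (2 ** bits)

# 2. Counter -- unique until it wraps around its fixed width.
_counter = itertools.count()
def counter_nonce(bits: int = 32) -> int:
    return next(_counter) % (2 ** bits)

# 3. Random value -- repeats are possible in principle but, with enough
#    bits (say 256), astronomically unlikely within any reasonable time.
def random_nonce(bits: int = 256) -> int:
    return secrets.randbits(bits)

print(counter_nonce(), counter_nonce())  # 0 1
```

With only a handful of bits all three sources repeat quickly; "used only once" really means "wide enough that a repeat is negligible over the key's lifetime".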
If the attacker did a replay attack, taking a message that was sent before and simply resending that exact same message, the receiver would see that the nonce value is the same as before, and therefore that it's a replay of the original message. So nonces are commonly used to identify replay attacks, and you'll see them throughout the upcoming protocols. So here's our scheme. User A sends its identity and a nonce value to B. Let's say, just for this example, the nonce value is a random number and the ID is an IP address. So I choose a random number for N1; let's say N1 is 51683, just for this simple example, and the ID identifies user A. B gets this message and realizes: ah, we need to exchange a session key. So what B does is generate a session key, KS. It's a random session key, so B can just generate a random number of a certain length, and that will be the session key. Then it sends back message 2. Look at message 2: it contains the session key KS, the value we just generated, and the identity of A, since the message came from A and we send back a response saying this is to A. We also include our own identity, B, and a new nonce value, N2; let's say I choose just a random number for N2. But to prove that we've received the first message, and to allow A to identify that this response corresponds to the first message, we usually also include a function of the first nonce value. What's the function? Well, it could be as simple as increment by one, just to indicate that we know what the first nonce value was, and you'll see that's especially useful in the third message. So as an example of how this may be implemented, f(N1) may be 51683 plus one, to let A know that this message is a response to the first one. So the identities are included, the session key, importantly, is included, and the two nonce values, where one is a function of the first nonce, and all of that is encrypted with KM.
KM is the master key that both B and A know. So if someone intercepts this second packet, they cannot see KS; it was encrypted with KM. A receives the second packet, and since A has KM, it decrypts the packet and learns KS. We know the session key, we've achieved our aim, but we do a final check, a final confirmation, with this third message, again to prevent replay attacks. What is the third message? It's a function of N2, let's say N2 plus one. We received N2 as 18603; we send back 18604, and we encrypt it with the session key. This proves to B that we got that second message and that we have the session key. When B receives it, B knows that A has learned the session key, because the only other entity that can see KS is the one that has KM, and KM was a master secret shared only between A and B. So these are the three steps for distributing KS. The role of the nonce values here is that, without them, there are opportunities for someone to replay an old message and trick a user into using an old session key. And we said we want to change session keys in case one is compromised, so we don't want to use old session keys; maybe one was compromised, and the attacker is trying to get us to use it again. Those are the steps, and they are performed automatically. Now let's consider the performance of this approach in terms of the number of keys. Consider an example network with 1,000 users, 1,000 computers or 1,000 hosts, and we want end-to-end encryption: don't worry about the applications, just encryption between the users or the end devices. How many master keys are needed in the network? What we've shown in this protocol is the session key exchange between A and B. If it was between A and C, it would be the same steps but different key values, in particular a different master key. How many master keys are needed? What's the equation?
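The three messages just described can be simulated end to end. As before, the XOR stream "cipher" is a stand-in for a real one so the sketch stays self-contained, and the nonce function f is the increment-by-one example from the lecture.

```python
import hashlib
import secrets

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (illustration only, not a real cipher)."""
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

km = secrets.token_bytes(32)    # master key, manually exchanged beforehand

def f(nonce: int) -> int:       # agreed nonce function: increment by one
    return nonce + 1

# --- Message 1: A -> B : IDA || N1 (sent in the clear)
id_a, id_b = b"A", b"B"
n1 = secrets.randbits(32)

# --- Message 2: B -> A : E(KM, KS || IDA || IDB || f(N1) || N2)
ks = secrets.token_bytes(32)    # B generates a fresh session key
n2 = secrets.randbits(32)
body = ks + id_a + id_b + f(n1).to_bytes(8, "big") + n2.to_bytes(8, "big")
msg2 = xor_crypt(km, body)

# --- A decrypts message 2 with KM, recovers KS, and checks f(N1)
plain = xor_crypt(km, msg2)
ks_at_a = plain[:32]
fn1 = int.from_bytes(plain[34:42], "big")
assert fn1 == f(n1)             # message 2 answers our N1: not a replay
n2_at_a = int.from_bytes(plain[42:50], "big")

# --- Message 3: A -> B : E(KS, f(N2)) -- proves A now holds KS
msg3 = xor_crypt(ks_at_a, f(n2_at_a).to_bytes(8, "big"))
assert int.from_bytes(xor_crypt(ks, msg3), "big") == f(n2)
print("session key established:", ks_at_a == ks)
```

If an attacker replayed an old message 2, the f(N1) check at A would fail, because the old ciphertext answers a previous nonce, not the fresh N1.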
This is the exchange between A and B. If we have 1,000 users, we need an exchange between user one and two, user one and three, user one and four, and so on. There are 1,000 users; how many pairs of users are there? 1,000 times 999 divided by 2. 499,500. Good. 499,500 master keys, about half a million. That's a lot of master keys that need to be set up at the start, and these master keys are exchanged manually for this system to work: that was the assumption at the start, that before we did any automatic exchange, we manually exchanged KM. How many session keys are needed at any one time? Note that the master keys were manually exchanged. How many session keys are there in the network at any one time, at maximum? The same as the number of master keys. Every pair of users has a shared master key, KM, and they also have a session key; they use the master key to encrypt the session key and then use the session key to encrypt their data. So the number of session keys at any one time is the same. But the session keys were automatic: they were exchanged across the network in an automatic way, and they're easy to change because the exchange is automatic. That is, today I have half a million session keys; tomorrow I may have a different half million session keys, and the next day we perform this process between every pair and generate new session keys again. That's the concept of changing your keys as often as possible to aid security, and because we have automatic exchange of those session keys, that's possible. What's the problem with this scheme? The master keys are very hard to exchange up front: we still require half a million keys to be exchanged manually. This scheme is okay if we had a small network, maybe tens of users, maybe up to 100 users. But once the network size grows to a thousand users, there are just too many master keys that need to be manually exchanged. We need either a small network or a way to automatically exchange the master keys.
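The pair-counting equation above is just the number of unordered pairs of n users:

```python
def master_keys_needed(n: int) -> int:
    """Pairwise master keys for n users: one per unordered pair,
    n * (n - 1) / 2."""
    return n * (n - 1) // 2

print(master_keys_needed(1000))  # 499500 -- about half a million
print(master_keys_needed(10))    # 45 -- manageable for a small network
```

The quadratic growth is exactly why the decentralized scheme stops scaling: doubling the number of users roughly quadruples the number of master keys to distribute manually.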
And we'll see in another approach, we could use Diffie-Hellman, for example, to exchange master keys. So use public key cryptography. But sticking with symmetric key cryptography, this decentralized approach doesn't work so well with large networks. What's the solution? Use a key distribution centre. Use another entity in the network that will assist with the exchange of session keys. And by introducing that, we'll reduce the number of master keys necessary.