Let's consider a second example of the pigeonhole principle: a cryptographic hash function. A hash function H takes some input, which we'll denote x, and produces some output, say y. Typically a hash function takes an input that is quite large compared to the output value. For this example, let's say the input x is a 100-bit number and the hash function produces an output 64 bits in length. In practice, hash functions normally accept inputs of varying lengths that can be much larger than the output hash value, but in this example, just for simplicity, I'll assume the input is fixed at 100 bits and the output is smaller than the input, at 64 bits. That's our hash value. So when we take the hash of some input x1, we get some output y1, and typically, if we take a different input, we should get a different output. Mapping different inputs to different outputs is a desirable feature of hash functions. But the pigeonhole principle tells us that's impossible; that is, it tells us there will be some inputs that map to the same output. Here we have n = 2 to the power of 100 possible inputs, which are the objects we start with, and we map them to hash values, the places, of which there are m = 2 to the power of 64 possible output values. So the number of objects is larger than the number of places, and the pigeonhole principle tells us that at least one of those places must contain two or more objects. That is, there must be two different inputs that produce the same output value.
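We can see the pigeonhole principle force a collision in code. The sketch below uses a hypothetical "tiny hash" (SHA-256 truncated to its first byte, so there are only m = 256 possible outputs) and feeds it n = 257 distinct inputs; with n > m, some pair must collide:

```python
import hashlib

def tiny_hash(x: bytes) -> int:
    """Truncate SHA-256 to 8 bits, so there are only m = 256 possible outputs."""
    return hashlib.sha256(x).digest()[0]

# Hash n = 257 distinct inputs. Since n > m, the pigeonhole principle
# guarantees at least two inputs share an output.
seen = {}       # maps each output value to the first input that produced it
collision = None
for i in range(257):
    x = str(i).encode()
    y = tiny_hash(x)
    if y in seen:
        collision = (seen[y], x, y)  # two distinct inputs, one output
        break
    seen[y] = x

print(collision is not None)  # True: a collision is unavoidable
```

Which particular pair collides depends on the hash, but that *some* pair collides is guaranteed by counting alone, exactly as in the 100-bit-to-64-bit case.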
In hash functions, that's called a collision. So we cannot avoid collisions when the number of inputs is larger than the number of possible hash values. With a cryptographic hash function, the requirement is normally that the mapping of inputs to hash values is effectively random. Given that, we can calculate approximately how many inputs, on average, map to the same hash value: it is n divided by m, or in our case 2 to the power of 100 divided by 2 to the power of 64, which is 2 to the power of 36, or approximately 7 by 10 to the power of 10. So the pigeonhole principle tells us that if we have a hash function which takes an input fixed at 100 bits in length and randomly maps those inputs to 64-bit hash values, then on average there will be 2 to the power of 36, or about 69 billion, inputs that map to the same hash value. That is, we will always have collisions.
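The average-collisions arithmetic above is just integer division of the input count by the output count, which is easy to check directly:

```python
n = 2 ** 100  # number of possible 100-bit inputs (the objects)
m = 2 ** 64   # number of possible 64-bit hash values (the places)

# Average number of inputs per hash value, assuming a uniform random mapping.
avg = n // m

print(avg == 2 ** 36)   # True: 2^100 / 2^64 = 2^(100-64) = 2^36
print(avg)              # 68719476736, i.e. about 6.9 * 10^10
```

Since 2^36 is roughly 69 billion, each 64-bit hash value is shared, on average, by tens of billions of 100-bit inputs.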