So, today I will talk about a signature scheme without the forking lemma. Basically, this is a very efficient scheme, and the fact that we are not using the forking lemma is what makes it more efficient; the details we are going to see in the course of the talk. A digital signature scheme consists of a signing algorithm, and as you can see at the top of the slide it is PKI based. PKI means public key infrastructure. In a PKI based system, every user is responsible for creating his own secret key and public key. Nobody helps that individual; he alone is responsible for all of his secrets. Nothing is done outside; everything happens in his own backyard, and there he creates the secret key and the public key. That is a PKI based system. In the signing algorithm the input is the message M, the secret key, and some ephemeral values. By ephemeral values I mean the temporary random values used during the signing process; finally the signed document, or digital signature, is produced. We denote that by sigma. It requires the secret key as a parameter, so it can be produced only by the owner of the secret key, and that is the purpose of a signature: it binds the message with the signer. If anybody else could do it, that would be a forgery. So it should be possible only for him. The secret key is a parameter, and just as with a paper signature, we have a verification mechanism, and verification can be done by anyone; it is a public process. The verification algorithm takes the signature, the message, and the public key of the corresponding user, and outputs either accept or reject. So if somebody simply brings a document claiming that it is signed by so and so, the verification algorithm is used to check the validity of the claim: the signature, the message, and the public key of the person claimed as the signer are given as input.
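The three-algorithm shape described above — key generation, signing with the secret key, public verification — can be sketched with a textbook RSA signature on tiny, insecure parameters. This is only a stand-in to show the interface, not the scheme of this lecture; the primes and exponent are fixed illustrative values.

```python
# Toy sketch of the generic KeyGen/Sign/Verify interface, using textbook
# RSA with tiny fixed parameters (insecure; shape of the API only).
import hashlib

def keygen():
    # Fixed tiny primes; a real user would generate large random primes.
    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # secret exponent
    return (n, e), (n, d)                   # (public key, secret key)

def H(message, n):
    # Hash the message down to an integer modulo n.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message, sk):
    n, d = sk
    return pow(H(message, n), d, n)         # only the secret-key owner can do this

def verify(message, sigma, pk):
    n, e = pk
    return pow(sigma, e, n) == H(message, n)  # anyone can check with the public key

pk, sk = keygen()
sigma = sign(b"hello", sk)
assert verify(b"hello", sigma, pk)
```

The point of the sketch is the asymmetry: `sign` needs `sk`, while `verify` needs only the message, the signature, and `pk`.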
Obviously, it should reject if the signature was not generated by the person that is claimed. If it was indeed generated by the person with the corresponding secret key — this pk here and sk here are the public key, secret key pair of an individual — then it accepts. So the verification algorithm accepts if the signature was actually produced by the corresponding signer, and rejects if it is a forgery, generated by somebody who does not own the secret key. This is the generic picture, but then how do we prove security? In the early days of cryptography, security was typically argued through combinatorial counting: it will take so many million attempts to break the system, and so on. Such arguments are obviously not scientific; they have no rationale, and there is no justification for simply counting certain possibilities and declaring the system very difficult to crack. Unfortunately, that is how it was done for a while, but later on people tried to formalize the manner in which we argue for security. Security is an intuitive notion. In a straightforward manner we can say that it should be impossible for somebody to forge a document, or create a forgery; that is the normal way to express our desire. But how do we formalize it? How do we mathematically and rigorously state the impossibility? The body of cryptography that worries about formally arguing for security properties is called provably secure cryptography. In a provably secure cryptosystem, the general framework works as follows. For most impossibility results we handle the scenario through a reduction: assuming that a forger is available and can create a forged document, we build a reduction algorithm which takes the forged document and uses it to solve a hard problem.
It will produce a solution to the hard problem, all in polynomial time, but that will be a contradiction to the assumption that the problem is hard. Basically, the problem is believed not to have any polynomial time solution, and such hard problems and their hardness assumptions form the cornerstone for designing provably secure systems. In fact, we are on somewhat slippery ground at this point: there are many problems whose exact complexity status is not known — neither NP-completeness nor polynomial time solvability. Take for example the famous factoring problem: given an integer n, you have to factor it into prime factors. For this particular problem we do not yet have a polynomial time algorithm, nor do we have any concrete hardness or complexity results. So we call these things hardness assumptions: we assume factoring is hard. Similarly, there are several reference problems, and these reference problems act as the fundamental tools which allow us to establish the contradiction. So let me discuss a very simple problem which I am going to use as the hard problem. We are given a cyclic group G and a generator g. If g is a generator, that means the powers of g produce all the elements of the group. Therefore, given any element x there exists a unique integer y such that g^y = x, because the group is cyclic and the powers of the generator exhaustively generate G. This integer we write as: the discrete log of x to the base g is y.
So x is a group element, g is a group element, and the discrete log of x to the base g is the integer y such that x = g^y. You are given a group element x, you are also given the generator, and you have to find out what power of g gives the element x. Finding this is called the discrete log problem, and the exact complexity status of the discrete log problem is not known, but we assume that it is very hard. This is called the discrete log assumption. We do not know whether it is polynomial time solvable, and at the same time we have no idea of the exact complexity. So the basic assumption here is that the discrete log problem is hard, and there is a formal way to put this when it comes to the formal reduction. Although in an English sentence we say that there is no polynomial time algorithm, a first attempt at formalizing it is: for any polynomial time algorithm A, given x and g, A(x, g) ≠ y — no polynomial time algorithm will produce the output y. But the more proper way to say this is in terms of randomized algorithms: if A is any polynomial time randomized algorithm, the probability that A(x, g) outputs y is very small — so small that you would think it is almost never going to happen, that it is impossible. So we write: Pr[A(x, g) = y] ≤ epsilon, where epsilon is a very small number. For any polynomial time randomized algorithm A, given x and g, the probability that you get y is negligibly small, extremely low. This is how we formally state the impossibility, or the hardness of solving the discrete log problem in polynomial time.
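The discrete log problem just described can be made concrete with a tiny toy group, where exhaustive search still works. The parameter values below are illustrative; at real sizes (hundreds of bits) this loop becomes hopeless, which is exactly the hardness assumption.

```python
# A minimal sketch of the discrete log problem in a small multiplicative
# group Z_p^*: with toy parameters brute force is instant, but the search
# space doubles with every extra bit of p.
p = 1019                      # small prime (toy size only)
g = 2                         # a generator of Z_p^* for this p
y = 73                        # the secret exponent
x = pow(g, y, p)              # the public instance: x = g^y mod p

def discrete_log(x, g, p):
    # Exhaustive search over all exponents -- infeasible at real sizes.
    acc = 1
    for k in range(p):
        if acc == x:
            return k
        acc = acc * g % p
    return None

assert discrete_log(x, g, p) == y
```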
So the contradiction is this: if you demonstrate a polynomial time algorithm which produces y as an answer with non-negligible probability — with a decent probability — that contradicts the assumption, and that is exactly the kind of contradiction we will strive for to finally demonstrate the security property of the protocol. Whether it is a digital signature scheme, an encryption scheme, or any crypto protocol, its security will ultimately be related to the hardness of certain problems; that is how the reduction technique, the provably secure cryptosystem technique, works. So we will always be directing ourselves toward contradicting a scenario of this kind. This is what one should remember as the bare-bones basics; the rest is a little technical, so let us not worry about the details here. Let me put this in an even simpler form to make things more accessible. In the last slide we have seen that there is a reduction, a way of connecting the hardness of a problem and the security of the cryptosystem. When does a problem become hard? I have already commented that factoring is hard. Of course, factoring can be very easy: if you give 77, it can be factored as 11 times 7 in no time. Factoring is hard only when the numbers are large — but how large should they be to make it really hard? This is where a parameter called the security parameter is introduced to describe the degree of difficulty: as the number of bits becomes larger and larger, the problem becomes harder and harder. So there is a parameter that captures the hardness, and when it becomes really large, it is not possible to solve the problem with any tool you have in a reasonable amount of time. For example, if the number of bits in n is, say, 2048, then to factor such a number may take more than 30 or 40 years with the best algorithm we have. That gives you a kind of security guarantee in the real world.
So the security guarantee in the real world is measured in terms of the security parameter. The reduction, which is established theoretically, connects the degree of difficulty of solving the problem to the degree of difficulty of breaking the cryptosystem. To put it simply, the forging ability and the solvability of the problem are related by the reduction. But let us take a closer look at the reduction. For example, if the reduction is quadratic, what does it mean? It means the following: if you want your digital signature scheme to be as hard to break as, say, 1000-bit factoring, then because the reduction is quadratic you must use numbers of about 1000 squared — one million — bits in the digital signature scheme. That means your digital signature scheme must use parameters which are extremely large. Theoretically, the reduction tells you that if this is hard, that is hard: it cannot be easy, because if it were easy, the hard problem would be easy; since the hard problem is not easy, breaking the scheme is hard. Such a theoretical reduction one can always establish, and while deriving these theoretical reductions people become convinced of the potential of the scheme; but in the real world, the effectiveness of the reduction has a lot to say. For example, if the reduction is quadratic, then to achieve a given level of difficulty of breaking your signature scheme, the scheme must use numbers with the square of that many bits — and working with one-million-bit numbers is simply impractical.
So the modern trend, the latest trend in research, is: you prove the security of the crypto protocol through a reduction, but then make the reduction as effective as possible. If the reduction is weak, your scheme is not considered practically viable. Theoretically it is secure, yes — eventually it can be made very hard — but to make it even reasonably hard you have to use excessively large parameters, which makes the scheme impractical. That means we have to work with a scheme where the degree of breakability and the degree of difficulty of the hard problem are related very closely; they should be almost the same. In that case I can use matching-sized parameters, which means that even with smaller parameters I can achieve a higher level of security. Therefore the focus is on the quality of the reduction. This slide comments on one particular well-known reduction technique used in digital signatures: classical signature schemes are all proved using a lemma called the forking lemma. But the forking lemma unfortunately connects the degrees of difficulty through a cubic function, and because of that it makes things extremely impractical. While you are convinced that the scheme is theoretically secure, to achieve even a reasonable level of security the argument, the reduction, forces you to use extremely large parameters. So the mission here is to design a system that is very efficient, and then come up with a very clever reduction to the hard problem in such a way that the two match in complexity. This is the formal definition of the tightness of the reduction: the two success probabilities are close, not related quadratically or cubically. I will come to the security model later; let me go straight to the design of the system, the description of the system. What are the ingredients — the mathematical tools we are using to build up our system?
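The cost of a loose reduction can be illustrated with back-of-the-envelope arithmetic. The numbers below are assumptions for illustration only: suppose a forger makes about 2^30 hash queries, and suppose the loose reduction loses roughly a cubic factor in that query count, so each factor of q_h lost costs log2(q_h) bits of effective security.

```python
import math

# Illustrative (assumed) numbers: a forger making q_h hash queries.
# A tight reduction gives a hard-problem solver with about the same
# success probability as the forger; a loose, forking-lemma-style
# reduction can lose roughly a q_h^3 factor, i.e. 3*log2(q_h) bits.
q_h = 2**30                      # assumed number of hash queries
target_bits = 128                # desired security level

tight_bits_needed = target_bits                            # tight: no loss
loose_bits_needed = target_bits + 3 * int(math.log2(q_h))  # cubic-style loss

print(tight_bits_needed, loose_bits_needed)
```

Under these assumptions the loose reduction forces the underlying hard problem to offer 218-bit hardness just to guarantee 128-bit security for the scheme, which is what drives the parameter sizes up.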
Our protocol is extremely simple — it has got only four steps — and it uses these two ingredients: one is a bilinear map and the other is cryptographic hash functions. I will spend a minute on each of them so that you are familiar with the way these things are used in our scheme and its security. So we have groups and we have a bilinear map. The bilinear map, as you can see, is a map e from G1 × G1 to G2. These are all groups, and the bilinear map has the following very interesting property; I will write it down and keep it here because I will use it again and again: e(x + y, z) = e(x, z) · e(y, z). You can see that G1 is an additive group and G2 is a multiplicative group; therefore addition in the first argument becomes multiplication of the images in G2. Similarly, e(x, z1 + z2) = e(x, z1) · e(x, z2). So by putting x = y you see that e(2x, z) = e(x, z)². In fact e(ax, z) = e(x, z)^a for any integer a, and likewise e(x, az) = e(x, z)^a. These are all simple properties, and the very important consequence is that e(ax, z) = e(x, az): the a has been moved from one coordinate to the other. You can see that, and from this the symmetry property also follows, and so on. So basically bilinear maps play around with a pair of groups; in fact there can even be three groups — one group for the first parameter, another for the second parameter, and a third for the image. Which groups to choose for the pairing functions is itself a huge area of research; the groups suitable for cryptographic applications, their computation, their efficiency — all of that is a huge area of research by itself. In fact, pairing-based cryptography is practically a sub-area of research in its own right. It is so huge that we are not going to look at the details.
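The bilinearity rules above can be checked numerically in a toy simulation: take G1 = (Z_q, +) with generator P = 1, and define e(x, z) = g^(xz) in a multiplicative group of the same prime order q. This only mimics the algebra (in this simulation discrete logs are easy, so it is not a secure pairing); real schemes use pairings on elliptic curves. All parameter values here are illustrative.

```python
# Toy "bilinear map": G1 = (Z_q, +), G2 = <g> inside Z_r^*, both of prime
# order q, with e(x, z) = g^(x*z).  A simulation of the algebraic rules
# only -- not a cryptographically useful pairing.
q = 1019                 # common prime order of both groups
r = 2039                 # modulus for G2, chosen so that q divides r - 1
g = 4                    # element of order q modulo r

def e(x, z):
    return pow(g, (x * z) % q, r)

x, y, z, a = 5, 12, 33, 7
assert e((x + y) % q, z) == e(x, z) * e(y, z) % r     # e(x+y, z) = e(x,z)e(y,z)
assert e((a * x) % q, z) == pow(e(x, z), a, r)        # e(ax, z) = e(x,z)^a
assert e((a * x) % q, z) == e(x, (a * z) % q)         # e(ax, z) = e(x, az)
```

The third assertion — moving the scalar a between coordinates — is the property the verification algorithm will lean on later.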
Let us assume that we have chosen an appropriate group, and let us also assume that a pairing function is frozen for our purpose. Our scheme will work with any pair of groups and any bilinear function; it really does not matter, as long as it satisfies certain minimal properties, and we are going to use only those properties in our discussion. That is the reason why, at a very high level, we can discuss the scheme and ignore all the lower-level details: what kind of pairing function, how to compute it efficiently, which group to choose — all those details we will ignore for the time being. The second ingredient is the cryptographic hash function. In data structures you are all familiar with the use of hash functions, but in cryptography the kind of hash functions to be used are completely different. The hash functions must have two important properties; I will list them down. First, it should be impossible for you to invert the function in polynomial time. If the hash function is a function from A to B, typically the cardinality of A will be way larger than the cardinality of B; therefore many elements may map to the same element, but it should still be impossible for you to find an inverse in polynomial time. Such hash functions are called cryptographic hash functions: given h(x), finding a preimage x in A should be impossible — or rather, should happen only with negligible probability; no polynomial time effort will allow you to discover x. This is preimage resistance: inverting should be impossible. I will use the word impossible, but rigorous mathematics is all about improbability: everything I state as impossible is actually an improbable, negligible-probability event in the formal mathematical discussion. Similarly, finding a pair x and y with x ≠ y such that h(x) = h(y) — such a pair is called a collision.
Since the cardinality of A is way greater than the cardinality of B, collisions surely exist, but finding a pair that collides should be very difficult for you. Such cryptographic hash functions are called collision resistant. So we have preimage resistance and collision resistance; there are several other nice properties, but for the time being I am going to use only these two. Again, hash function design is a huge area of research: there are challenges, annual events, hash function standards, and the design of hash functions is a separate enterprise in itself, because the hash function forms the basic unit of computation in almost all cryptographic protocols. That is the reason there is extensive ongoing research on the design of hash functions with appropriate properties. But again, as I said earlier, we will work with any hash functions that satisfy these properties; it does not matter which ones we use. The hash functions that we are going to use have the following structure. The first one takes a message — lm is the length of the message, so an lm-bit message — together with a group element, and maps them to a group element. There is another hash function which takes the same kind of input but maps it to a number modulo p: you divide by p and take the remainder, so you get a number from 0 to p − 1, and that is the target set. p is a very large prime, so this second hash function maps to a number, while the first maps to a group element, that is all. We need only two such hash functions, and they have the properties I mentioned: they are collision resistant, by which I mean it is almost impossible for you to find a pair x ≠ y satisfying h(x) = h(y).
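The two hash functions just described can be sketched from an off-the-shelf primitive. This is an assumed instantiation using SHA-256 with domain-separation tags, in the same toy additive group Z_q used earlier: H1 maps (message, group element) to a group element, H2 maps the same kind of input to a number modulo the group order.

```python
import hashlib

# Sketch of the scheme's two hash functions, built from SHA-256 (an
# assumed instantiation).  In the toy model, "group elements" of
# G1 = (Z_q, +) are just integers mod q.
q = 1019   # toy group order (illustrative size only)

def H1(message: bytes, elem: int) -> int:
    # Maps (message, group element) to a group element of the toy group.
    d = hashlib.sha256(b"H1" + message + elem.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % q

def H2(message: bytes, elem: int) -> int:
    # Maps (message, group element) to a number in {0, ..., q-1}.
    d = hashlib.sha256(b"H2" + message + elem.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % q

assert H1(b"msg", 42) == H1(b"msg", 42)     # deterministic
assert 0 <= H2(b"msg", 42) < q              # lands in Z_q
```

The `b"H1"` / `b"H2"` tags are an assumed domain-separation trick so that the two functions behave as independent hash functions even though both are built from SHA-256.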
These are the only things that I need. Now — yes, yes — capital P is a generator of the group. The reason we used capital P there is that G1 in general could be the points on an elliptic curve; it could be an arbitrary group. So P here is a generator of the group, and for any group the discrete log is an integer: here x and the generator are group elements, but the discrete log is an integer, smaller than the order of the generator. We are going to work with a group of prime order which is extremely large. Which one? No — capital P is the generator; this small p is the size of the group G1, the order of that group. I am sorry if it is causing any notational mix-up: basically capital P is a group element and small p is a number which represents the size of the group. So here is the simple way to generate the private key and the public key: choose two numbers s1 and s2 belonging to Zp — so s1 and s2 are numbers — and P1 and P2 are the group elements s1·P and s2·P. If you look into the notation: for any integer a, g^a is the notation for the generator combined with itself a times under the group operation if the group is multiplicative; if it is an additive group, g combined with itself a times is written a·g. Both notations are used: if you see the a·g notation, you are working with an additive group; if you see g^a, a multiplicative group. It is only a notational matter.
So the fact that we are working in the additive group G1 means we write s1·P and s2·P: that is P + P + ⋯ + P, the group operation applied s1 times, giving a group element — that is one key — and again the generator combined with itself s2 times gives another group element. These two group elements are the public key. So the private key is two numbers and the public key is two group elements. Notice that recovering the private key from the public key is the discrete log problem: given a·g and g, finding a. Although the discrete log is more visible in the multiplicative notation, the problem is the same even in additive notation — given a·P and P, finding a is extremely difficult. So from the public key information you cannot get the private key; obviously that is a minimal requirement, and that kind of guarantee is offered by the difficulty of the discrete log problem. So you cannot get the secret keys from the public key — but that is beside the point; I made the remark just for the sake of discussion. Here is the signature scheme — just as I promised, only four steps. What do we do to sign a message? Choose a random number r in Zp; this is called the ephemeral key, the ephemeral or temporary secret. The secret key is permanent, but this r, which is used during the process of generating the signature, will not be revealed; it acts as a temporary secret — that is why it is called the ephemeral secret or ephemeral value. You multiply r with the second public key to get Ym; then m and Ym are hashed, and the hash value is added to itself r times to get Xm. Recall that H1 outputs a group element; r times that group element gives another group element, therefore Xm is a group element.
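Key generation as just described can be sketched in the toy simulated group, where G1 = (Z_q, +) with generator P = 1, so the scalar multiple s·P is just s mod q. The tiny modulus and the degenerate generator are illustration-only assumptions; real parameters make the discrete log step infeasible.

```python
import secrets

# Toy key generation: private key is two numbers s1, s2 in Z_q; public
# key is the two group elements P1 = s1*P and P2 = s2*P.  In this toy
# group P = 1, so scalar multiples collapse to the scalars themselves.
q = 1019                               # toy group order
P = 1                                  # generator of the toy group

s1 = secrets.randbelow(q - 1) + 1      # private key: two numbers
s2 = secrets.randbelow(q - 1) + 1
P1 = s1 * P % q                        # public key: two group elements
P2 = s2 * P % q

# Recovering s1 from P1 is exactly the discrete log problem -- trivial
# here only because P = 1; infeasible at cryptographic sizes.
assert P1 == s1 and P2 == s2
```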
Qm is H2(m, Xm), and this is a number: H2 maps the pair to a number, therefore Qm is a number. s1, the secret key, is also a number, so these are all integers, taken mod p where p is the prime; so dm is an integer. The signature is (dm, Xm): Xm is generated in the second step and dm in the fourth step. You do just this computation and send it across — that is it, a very simple signing algorithm. It has got two group operations — r·P2 for Ym and r times the hash value for Xm — two hash evaluations, and then integer multiplication; that is all you have to do, and you have generated the signature. The signature is a group element and an integer; the size of a group element is about the same as the size of p, so if you are using a 1024-bit prime it is 1024 bits for the group element and 1024 bits for the number. And the message can be of any size: there is no restriction on the size of the message, because the message does not play any explicit role — it only becomes a parameter of the hash functions. Therefore there is no restriction on the length, and it is possible to use this system to sign a document of any length. So the signature is very compact, absolutely compact. But how do we do the verification? Here is the verification algorithm. The key to the verification is in the second step: forget correctness and other things and focus on the second dot. In the second dot I am checking whether e(Xm, P2) = e(H1(m, Ym), Ym). If this holds, the signature is valid; otherwise it is invalid. Why is that so? This is where the pairing-function-related computations are done; I will spend exactly one minute making you understand why this is the case.
So now you can see: you have Xm, you have P2, and the way they are related is exactly the property we played with earlier — the a over here moves over there. Look at how they are constructed: Ym is r·P2, and Xm is the same r times the hash value, r·H1(m, Ym). So both are multiplied by the same r. I do not require knowledge of r; the verifier will not have access to the value of r, but these values must satisfy the following condition: e(Xm, P2) = e(H1(m, Ym), Ym). The reason is simple. What is Xm? It is simply r·H1(m, Ym). Now this r can move to the other coordinate: when r moves to the second coordinate, P2 becomes r·P2 — but r·P2 is Ym, and that is what is over here. So simply moving the r across gives e(Xm, P2) = e(r·H1(m, Ym), P2) = e(H1(m, Ym), r·P2) = e(H1(m, Ym), Ym); when you remove r from the first coordinate you get this. The fundamental property e(a·x, z) = e(x, a·z) is what I am using for checking validity: the basic bilinearity which exists among the group elements is used in the verification algorithm. Notice that the pairing function is used only for verification; it is not used for generation. Pairing functions and the computations related to them are very expensive — much more expensive than ordinary group operations. Signing a document may be done by me any number of times, but verification is done rarely, only when there is a dispute or something like that; and verification can be done by anyone, so it can even be outsourced and done by other, more powerful resources as well.
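The signing steps and the pairing check can be put together in the toy simulated setting (G1 = (Z_q, +) with P = 1, pairing e(x, z) = g^(xz) mod r). Two caveats: the exact formula for dm is an assumed form (the transcript defers the step of the verification that uses dm, so the sketch passes Ym to the verifier directly and checks only the "second dot"), and the parameters are toy-sized.

```python
import hashlib, secrets

# End-to-end sketch of signing and the pairing-based check, in the toy
# simulated pairing setting.  The dm formula is an assumed form; the
# transcript's first verification step (which uses dm) is omitted here.
q, r, g = 1019, 2039, 4           # group order, G2 modulus, order-q element

def e(x, z):                      # toy bilinear map e(x, z) = g^(x*z)
    return pow(g, (x * z) % q, r)

def H(tag, message, elem):        # both hash functions, domain-separated
    d = hashlib.sha256(tag + message + elem.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % q

s1, s2 = 123, 456                 # private key (fixed for reproducibility)
P1, P2 = s1 % q, s2 % q           # public key (P = 1, so s*P = s)

def sign(m):
    rr = secrets.randbelow(q - 1) + 1         # ephemeral secret r
    Ym = rr * P2 % q                          # step 1: Ym = r * P2
    Xm = rr * H(b"H1", m, Ym) % q             # step 2: Xm = r * H1(m, Ym)
    Qm = H(b"H2", m, Xm)                      # step 3: Qm = H2(m, Xm)
    dm = (rr + Qm * s1) % q                   # step 4: assumed integer form
    return Xm, Ym, dm

def verify(m, Xm, Ym):
    # The "second dot": e(Xm, P2) == e(H1(m, Ym), Ym), which holds
    # because the ephemeral r moves across the pairing's coordinates.
    return e(Xm, P2) == e(H(b"H1", m, Ym), Ym)

Xm, Ym, dm = sign(b"any message, any length")
assert verify(b"any message, any length", Xm, Ym)
```

Note that `verify` never sees the ephemeral r: it only checks a relation that the honestly produced Xm, Ym must satisfy.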
Therefore it does not matter much if the verification algorithm is complex, but the signing algorithm should be simple — and that is also satisfied by our scheme: the signing algorithm has only basic group operations and elementary number-theoretic operations, while the verification algorithm definitely involves a pairing operation, but that really does not matter. Now you can see that if the signature is properly generated, the condition will hold; if it is not properly generated, such a thing will not be satisfied. And for someone to generate a pair that passes the check is also going to be very difficult — that is what we are going to see shortly. So basically, verification tells you that a signature generated in the proper fashion must satisfy this condition; it is very easy to see that a genuine signature satisfies it. But what is very important for us to show is that a person who does not have access to the secret key will never be able to produce a pair (Xm, σ) which passes the verification test — if he could, that would be a forgery, right? Therefore we have to argue that it is impossible for anyone else to produce a signature, and that is what we are going to do next. This is generally summarized in the form of a theorem — do not worry about it — and let us see how we formally prove the security. The details are going to take roughly 5 to 10 minutes of discussion, and then I can put all of it together and summarize the point.
The way security proofs are done in cryptography is a delightful mathematical exercise. Rather than the traditional mathematical apparatus of lemmas, sub-lemmas, and theorems, the formal way of proving security goes as follows. Let us take a real-life situation — what happens in the real world. Suppose I am trying to forge the signature of, say, Bill Gates. I will try to collect a lot of samples of his signed documents, which might be available, or I may have access to some of them — maybe I manage the server on which a lot of his signed documents are stored. So let us assume the adversary has access to a lot of signed documents. The question is: having got access to a lot of signed documents, is he empowered to generate a forged document? That is what we have to show is not possible: the adversary may have any amount of information — the public key, even a lot of signed samples — and in spite of all that, it should not be possible for him to forge. That is what we have to prove. So, in order to prove the robustness of the system, we have to prove that even if the adversary has a lot of information besides the public key (he does not, of course, have the secret key; he is a third person), it is still not possible for him to forge. But how do we formally prove it? Here is the model. There is a challenger and an adversary, and the adversary will get a lot of information from the challenger. We turn this into a game between a challenger and an adversary: whatever the adversary could get in the real world, we pretend is supplied by the challenger. In the real world I am not going to hand signed documents to a person trying to forge, but for the sake of the mathematical discussion we model a game in which the challenger, who would like to make sure his system is robust, supplies them.
Whatever information the adversary asks for, the challenger is willing to give, and this is called the training of the adversary. In the training phase the adversary chooses messages: I want the signature on M1, I want the signature on M2 — and, in order to discover weaknesses of your algorithm, the queries may have some predetermined pattern: I want the signature on 00000, I want the signature on 001100, I want the signature on 010101, and so on. He may choose any message and ask for its signature, and the challenger will provide it. What he asks is called a signature query, and what the challenger gives back is the signature response. In this way, in the training phase, a polynomial number of queries asked by the adversary are satisfactorily responded to by the challenger; then the adversary produces a forgery. This is the training phase, and after the training phase A gives the forged document; we want to show that such a thing is impossible. How do we show it? By showing that if a forged document is possible, then I can solve a hard problem. So this is what I will demonstrate to complete the proof: A will ask queries and C will respond. Obviously, in order for this interaction to solve a hard problem, the hard problem instance must be injected into the interaction — if what is happening has nothing to do with the hard problem, then the forged document will not help solve it. Therefore, during the interaction, C is going to inject the instance of the hard problem in a clever way, so that from the forged document he will be able to solve the hard problem. How does it begin? We first give the public key information; then during the training the hard problem instance is cleverly used in the responses; based on the responses the adversary generates a forged document; and then we demonstrate that the hard problem can be solved from that forgery.
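The training game just described has a simple skeleton: a challenger answering signature queries, and a win condition that only counts forgeries on messages never queried. The sketch below uses an HMAC as a stand-in oracle for "the signing algorithm" — an assumption purely to make the game runnable; only the game structure is the point.

```python
import hmac, hashlib

# Skeleton of the challenger/adversary training game.  HMAC stands in
# for the real signing algorithm; the structure (queries, responses,
# freshness condition on the forgery) is what matters.
class Challenger:
    def __init__(self):
        self.key = b"stand-in signing key"
        self.queried = set()

    def sign_query(self, m):
        # Training phase: answer any signature query the adversary asks.
        self.queried.add(m)
        return hmac.new(self.key, m, hashlib.sha256).digest()

    def adversary_wins(self, m, sig):
        # A forgery only counts on a message that was never queried.
        return m not in self.queried and hmac.compare_digest(
            sig, hmac.new(self.key, m, hashlib.sha256).digest())

C = Challenger()
for m in [b"00000", b"001100", b"010101"]:     # adversary's chosen queries
    C.sign_query(m)

# Replaying a trained response on a queried message does not count:
assert not C.adversary_wins(b"00000", C.sign_query(b"00000"))
```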
So that is why this is also called a simulation-based argument. Why simulation-based? In the actual interaction, C is going to inject the instance of a hard problem which even he does not know how to solve; therefore he can only simulate the responses. I will illustrate that subtle point in the formal proof of security through this case study. It is a very simple concept, though it takes a bit of getting used to; since this is our first encounter with it, let me go very slowly. Stay with me for the next five minutes and you will understand everything. Fortunately, the proof I am presenting here involves no big or mind-boggling calculation; it requires only a subtle understanding of what is happening, so let us clarify it through the interaction. The hard problem is called the computational Diffie-Hellman problem: you are given aP and bP, two random group elements, and from these you have to compute abP. Here a and b are integers, and neither a nor b is known to you. So you are given two elements and you must produce one more element of G, and that element must be abP. This is called the computational Diffie-Hellman problem, in honor of the two researchers who used it to kick-start so-called modern cryptography. Traditional cryptography was largely anecdotal, full of stories and non-mathematical, non-rigorous arguments; modern cryptography began with the pioneering effort of Diffie and Hellman, who studied this problem and introduced a novel technique from it. In their honor, this difficult problem is called the Diffie-Hellman problem, or the computational Diffie-Hellman problem.
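To make the problem statement concrete, here is a toy sketch. The group used here, Z_n with additive notation and generator P = 1, is chosen only so the arithmetic is visible; in such a group CDH is actually easy, so this illustrates only the shape of an instance, while real schemes use groups (for example elliptic-curve groups) where the problem is believed hard.

```python
import secrets

n = 2**61 - 1        # a Mersenne prime, used as a toy group order
P = 1                # generator; "scalar multiplication" kP is just k*P mod n

def smul(k, Q):      # compute kQ in the toy group
    return (k * Q) % n

# An instance of the computational Diffie-Hellman problem:
a = secrets.randbelow(n)              # unknown to the solver
b = secrets.randbelow(n)              # unknown to the solver
instance = (smul(a, P), smul(b, P))   # the solver sees only (aP, bP)
target = smul(a * b, P)               # and must produce abP
```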
So this is a hard problem; you should trust me on that, it is an assumption. We assume that given aP and bP, finding abP is extremely hard. Now I am going to demonstrate that if a forger exists, I can solve this problem in polynomial time, and that is a contradiction; remember that. But how do we arrive at the contradiction? Let us start with an instance of the hard problem, and suppose a forger exists: somebody claims, "I can forge, give me the system." Now C has to give him the system, and that means C must give a public key. What is the public key? It is the pair (s1·P, s2·P), where s1 and s2 are numbers. What C actually gives is this: he gives aP as the first public-key component, and for the second component he chooses a random s2 and computes s2·P. Notice that C does not know the complete secret key; he knows only half of it. C does not know a, and he has no way of computing a either, because aP is an instance of the hard problem and he has no way of solving it. But aP, as a group element, he can use as the first public-key component; s2 he knows, and these two he gives as the public key. "Can you break the system? Is it possible for you to produce a forgery?" The forger says, "Yes, of course it is possible; give me the training, give me the sample signatures. I will give you messages and you must give me the signatures." Now notice that in order to run the signing algorithm, one requires the secret key. Anybody who wants to produce a signature must have access to the secret key, but the challenger does not have the full secret key: s2 he knows, of course, but a he does not.
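The challenger's key setup can be sketched in the same hypothetical toy group Z_n with P = 1 as before; this illustrates the idea of embedding the hard-problem instance into the public key, not the paper's exact scheme.

```python
import secrets

n = 2**61 - 1
P = 1

def challenger_setup(aP):
    """Embed the CDH instance into the public key.
    aP comes straight from the hard-problem instance; the challenger
    never learns a itself."""
    s2 = secrets.randbelow(n)       # the half of the secret key C really knows
    pk = (aP, (s2 * P) % n)         # public key = (aP, s2*P)
    return pk, s2
```

From the forger's point of view this public key is distributed exactly like an honestly generated one, which is what makes the game a faithful stand-in for the real world.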
So how will he give the training? He has to simulate. He will not actually run the signing algorithm; instead, he will produce values in some clever manner which pass the verification test, so that he can send back mathematically consistent results. He produces them in a different way, not by running the signing algorithm, and that is why this is also called a simulation-based proof: C gives the training purely on the basis of simulated results, not by actually running the algorithm. To enable this simulation, he plays one small trick, and that trick defines the very characteristic of the security model. What does he do? Remember that I told you hash functions have the collision-resistance property, non-invertibility, and so on. Here is one idealized function which works in the following manner; it is called a random oracle: it simply chooses a random element from the range and gives that as the response. So we do not work with any specific hash function; we work with an idealized hash function, and that is the random oracle. "Oracle" means it gives a response; "random oracle" means it gives a random response. A random oracle modeling a hash function from A to B works like this: you give me an x belonging to A and ask, what is h(x)? h(x) is an element of B, but I am not going to compute it with any particular function; I will pick a random element and give that as the response. You ask for h(y); I pick another random element and give that as the response for h(y). And in order to make this behave like a function, I maintain a table of past answers, and this table stays polynomial in size because the game is played only a polynomial number of times.
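A random oracle is easy to sketch: answer each fresh query with a fresh random element, and remember past answers in a table so that repeated queries stay consistent. A minimal sketch (the integer range standing in for B is an arbitrary illustrative choice):

```python
import secrets

class RandomOracle:
    """Lazily sampled random function from arbitrary inputs to range(range_size)."""

    def __init__(self, range_size):
        self.range_size = range_size
        self.table = {}                  # grows by one entry per fresh query

    def query(self, x):
        if x not in self.table:          # first time: sample a fresh response
            self.table[x] = secrets.randbelow(self.range_size)
        return self.table[x]             # repeated query: same answer as before
```

Because the table grows only with the number of queries, and the game is played a polynomial number of times, the table stays polynomial in size, exactly as described above.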
So every time, I handle the hash functions not as concrete hash functions but through their mathematical abstraction, namely the random oracle, and that random oracle has the property that collisions occur with the least possible probability and inversion succeeds with the least possible probability. This is obvious: I am picking a random element from the range and giving that as the response, so the probability that a collision occurs is minimal, the probability that you can find a preimage is minimal, and so on. The random oracle model, then, abstracts hash functions as random functions, and this is what I am going to use. With the random oracle as an abstraction, I can cook up the training, and I can play around in the following manner. Look at what I do with the hash functions here; let me make sure this is clear. The first hash function, h1, must produce a group element, so I only have to produce a random group element. How do I produce a random group element? I take bP, the other fixed element from the hard-problem instance, choose a random number h_i, and give h_i·bP as the response. So the random oracle works as follows: it never runs any hash-function algorithm. What C says is: you want a hash value, I will give you a hash value; do not ask me which hash function I am using. Whatever query you ask, whatever signature you ask, I will give you responses which are mathematically consistent with each other, they will have the collision-resistance property, and you will get the proper training you want.
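The slide does not fully specify the scheme's equations, so as a generic illustration of how programming the random oracle lets the challenger answer signing queries without the secret key, here is a Schnorr-style simulation in the same toy group Z_n with P = 1. This is a standard textbook device, clearly not the authors' scheme: pick the response and the challenge first, solve the verification equation backwards, then program the oracle.

```python
import secrets

n = 2**61 - 1      # toy prime group order; P = 1 as before
P = 1

def simulate_signature(pk, m, ro_table):
    """Answer a signing query without knowing the secret key behind pk:
    choose s and c first, solve the verification equation backwards for R,
    then program the random oracle so that H(R, m) = c."""
    s = secrets.randbelow(n)
    c = secrets.randbelow(n)
    R = (s * P - c * pk) % n        # chosen so that s*P == R + c*pk
    ro_table[(R, m)] = c            # program the random oracle entry
    return R, s

def verify(pk, m, R, s, ro_table):
    c = ro_table[(R, m)]            # the "hash" of (R, m)
    return (s * P) % n == (R + c * pk) % n
```

The simulated pair (R, s) passes verification even though the secret key behind pk was never used, which is exactly the trick the challenger plays during the training.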
So that is the way the game is played, and this is where the second element, bP, of the hard-problem instance is injected: for the other hash function you simply generate a random value, so that one is very trivial, while for the first hash function the response is a random value multiplied into bP, the instance of the hard problem, and that is all. Now we are ready to quickly look at how the simulation runs. The adversary sends a message m to C. C randomly picks three numbers, d_m, q_m, and h; he computes y_m in one way, sets the h1-value in another way, and defines x_m accordingly. Notice that h is selected by him, s2 is the key component he chose himself, and y_m is computed right here. Now C sends (x_m, d_m) as the signature. Notice that (x_m, d_m) is not obtained by running the signing algorithm; it is obtained in a rather funny way, but it will satisfy verification: I have cooked up the x_m and d_m values precisely so that the pair passes the verification equation, and that is what is demonstrated here by a very simple calculation involving h and the s2 terms.
So the s2 terms cancel, h moves over here, hP sits here, and y_m is over there; that is it, an absolutely simple mechanism, and it produces a pair which is going to act as a legal signature. It will look like a signature; it is a signature: the adversary sends an m, I send back a pair, he can verify it, and it will satisfy the verification equation. All of this is possible because of the random oracle; without the random oracle, since I do not have the secret key, it would not be possible for me to run the signing algorithm and produce signatures at all. This is the crux of the formal proof: in a simulation proof, demonstrating that the simulation is perfect is the distinctive part of cryptographic protocols and their security. In NP-completeness and hardness proofs, the reduction takes one input and converts it into another input; here we have something more to do, because we must simulate the real-world scenario, the scenario in which the adversary has access to a lot of samples. What kind of samples will he have access to? I cannot predetermine that; it could be any sample, whatever he can lay his hands on. To model mathematically that he is capable of getting samples on any message, in our game we let A send a message m and require C to send the response. So C has to cook up the response, and mathematically we must demonstrate that such a response is possible to generate; if we can generate it, it is a perfect simulation, and this is an example where the simulation is perfect. Again, I call it a simulation because I produced the signatures not by running the signing algorithm but by cooking up values, picking random numbers and doing some computation, and simply producing a pair that convinces him. So this game can go on: he can choose any m and send it to C, and C will send a legal-looking signature back. Therefore he is equipped with whatever he wants, and once he has everything with him, he can now
produce a forgery with a decent probability. Let us assume he is able to produce a forgery; the next couple of slides show how the forgery is used to solve the hard problem, and that will complete our discussion. Here are the details, a little bit of mathematics, showing how from the forged document you can indeed get abP. The mission is: given aP and bP, compute abP. In the process of computing it, the challenger has interacted with the forger, who is assumed to exist; you have given him the training, and he has produced a forgery. The forgery is this. Notice that the public key is (P1, P2) = (aP, s2·P), where a is unknown and s2 is known, and the forged document is (x_m*, d_m*). Since (x_m*, d_m*) is a valid forgery, it must satisfy the verification equation, and if it satisfies that equation, then certain conditions must hold. A simple two-step computation shows that the x_m* the forger sent has the form given in equation 7, and similarly, by an extremely simple calculation, d_m* has the form d_m* = a·q_m* + s2*. So how do I use the x_m* and d_m* that have come to me to solve the hard problem? That is the next slide, and with that we will be done. Notice that d_m* carries the factor a. When I multiply d_m* into bP I get, informally, abP; that is what the computation does. Since d_m* = a·q_m* + s2*, multiplying d_m* by bP gives q_m*·(abP) + s2*·(bP); q_m* is in the way, so I divide by q_m*, and the additional term s2*·(bP) can be subtracted out by this computation, because all
these values are known. All the values needed are known, so the extra term cancels automatically, which shows that abP can be obtained. Therefore, here is a new algorithm for solving the computational Diffie-Hellman problem: given aP and bP, interact with the forger for polynomial time; the forger gives you a forgery; from the forged document perform this computation, and you have abP. The computation has only two or three steps, so it is polynomial: given aP and bP, you obtain abP in polynomial time, and this is a contradiction. It is not possible, which means such a forger cannot exist. Anyone can have any amount of information and any kind of sample signatures, but it will still not be possible for him to produce a forgery, because anyone who could produce a forgery would thereby solve the hard problem in polynomial time, as demonstrated by this game between the challenger and the adversary. The challenger takes the hard-problem instance and injects it carefully, making the forger work with the hard-problem parameters, and thus the forging ability is converted into problem-solving ability for us. This is the crux of provable security, and the computation shows that if the forger succeeds in producing a forgery with non-trivial probability, then with the same probability I obtain abP as well. Connecting all the dots, you now conclude that such a forger cannot exist, because we assumed that given aP and bP it is not possible to find abP; that refutes the possibility of a forger existing, which means my system is really robust. To summarize: we have a system involving only simple computation, the reduction is a direct one, and the computing time for the reduction and for the forgery are not just polynomially related but essentially identical. Therefore, the epsilon
of the forger's success and the epsilon of success in solving the hard problem are the same, which means both have the same degree of difficulty, which in turn means we can use this signature scheme with smaller parameters and still obtain the full hardness of the computational Diffie-Hellman problem of the corresponding size. If you want a signature scheme as strong as the 1024-bit computational Diffie-Hellman problem, use groups whose size is just 1024 bits; the same size is good enough, and that is the implication of the tight reduction. Now it is time for me to acknowledge the authors: these are two of my PhD students, Sharmila Deva Selvi and Sree Vivek, and this work got the best student paper award. Several extensions of it have been reported subsequently, and it has been made a lot more general: we have used it for the design of an ID-based system, and we have also used it for aggregate signatures. Aggregation means that when there are several signatures, instead of storing each one separately, it would be nice to compress all of them into a smaller document, from which you can recover the signatures whenever you want at a later point. How effectively you can compress depends on the ratio between the total number of signatures and the resulting size, and we obtained optimal compression in the sense that you can take an arbitrary number of signatures and compress them to the size of a single signature. There is a winding scheme and an unwinding scheme, all of which use this as the basic module, and so on; several things have happened further to that. My objective was to give a simple expository talk on the basic idea and explain the art of provable security through a simple case study. With this I would conclude my talk; if you have any questions, I will be happy to take them.
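Since the whole reduction turns on the extraction step, here is a toy check of its algebra in the same hypothetical Z_n group with P = 1. We fabricate a forgery component of the stated form d* = a·q* + s2* and confirm that the challenger's computation recovers abP; again, in this toy group CDH is easy, so this checks only the arithmetic of the extraction, not the hardness.

```python
import secrets

n = 2**61 - 1      # toy prime group order; P = 1
P = 1

def extract_abP(bP, d_star, q_star, s2_star):
    """Challenger's final computation:
    d* * bP = q* * (a*bP) + s2* * bP, so
    abP = (d* * bP - s2* * bP) * (q*)^{-1} mod n."""
    num = (d_star * bP - s2_star * bP) % n
    return (num * pow(q_star, -1, n)) % n   # q* is invertible since n is prime

# fabricate a toy forgery consistent with d* = a*q* + s2*
a, b = secrets.randbelow(n), secrets.randbelow(n)
q_star = secrets.randbelow(n - 1) + 1       # nonzero, hence invertible mod n
s2_star = secrets.randbelow(n)
d_star = (a * q_star + s2_star) % n         # the form the forgery satisfies
bP = (b * P) % n

assert extract_abP(bP, d_star, q_star, s2_star) == (a * b * P) % n
```

The extraction itself is two multiplications, one subtraction, and one modular inversion, which is the "two or three steps, therefore polynomial time" claim made above.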