So it's a pleasure to be here. Given that this was supposed to be a long set of tutorials, and I'm somewhat old school, I actually like to use the whiteboard, even though later we will switch to PowerPoint once we have enough background. Just a logistics question for the organizers: should I put the boards in the center, or keep them here? Okay, like this; if there is a third board we'll put it in the center. The reason I need at least two is that there will be one table that we will keep filling in throughout this tutorial. We'll readjust in the break if needed.

All right, so as I said, it's a pleasure to be here. I'm going to start today by introducing some basic concepts about randomness. Most of the material will be quite theoretical. Having said that, there is a subject very dear to my heart that I'm not going to talk about, but the organizers just reminded me of it: a lot of this material may look theoretical, maybe beautiful but not applicable in practice, yet much of it we actually use. For example, we have a series of works on building good random number generators: an attack on the Linux random number generator, proving security of the Windows random number generator, improvements and optimizations. A lot of what will be covered here was directly useful there, and I would dare to say that was a highly applied line of work. So I believe many of the concepts you will learn, not just from me but from the other speakers, will hopefully be useful.

All right, let's get started. I'm going to keep one table here; for now I will ambitiously use a whole board, and if we need more space we'll see — maybe in the break I'll make a slide with this table. This is the table we are going to fill in. Rather than splitting this tutorial into basic definitions first, where I remind you what entropy is and so on, we'll keep our eye on this big picture and introduce things as we need them.

So the table has a bunch of rows and columns. In the rows there will be various applications, not all of them strictly cryptographic. The first task we care about in cryptography is, roughly, that people cannot lie. All right, let's try; I will do my best — and you're right, this might be important to write in black. So the first row — I'll try to use bigger letters, but you'll have to deal with my handwriting — is what I call soundness, which is roughly speaking the inability of potentially untrusted people to prove false statements. There is a statement, it could be true or false, and we would like to be able to catch people who try to prove false statements. The second and third rows are more traditional cryptographic tasks. Here I'll use "Auth" to stand for authentication; for this tutorial, think of message authentication.
I want to sign a message; I want to say that this message comes from me. That is the task of authentication. The third task, which is roughly half of cryptography, I'll call more generally privacy. The last row will come a little later; for now I won't fill it in.

Those are the tasks; now for the settings. The first setting we consider, just to get everybody on the same page, is the setting of ideal randomness, which I will denote by a dollar sign. This is the setting where we have truly random bits. Essentially the point of this school is: how important are random numbers for various tasks — the ability to generate random numbers, to have good randomness? So let's go through this column.

For soundness, as I said, I will talk about questions that have a correct answer. Does this graph have a Hamiltonian cycle? Is Fermat's theorem true or false? In complexity theory we usually abstract this as classes of languages — languages decidable in polynomial time, classes like P versus NP. I assume you are a little familiar with that; it's not needed for the rest, it's just for the big picture this table is going to give you.

In this setting, if you have randomness, you can actually do amazing things. The relevant class is the class of interactive proofs: we allow somebody we call a prover to interact with a verifier. The verifier is computationally bounded, the prover is potentially computationally unbounded, but the prover is not trusted. The class of statements that can be proven this way is called IP, and there is a celebrated result from the early '90s that this is a really huge class: IP equals PSPACE, the class of languages decidable using polynomial space. This already shows that randomness is important — and indeed, as we will see, randomness is really crucial in this setting. So for soundness, randomness helps us a lot.

Moving to cryptography — we'll come back to soundness shortly — for authentication and privacy, randomness is really critical, because to do cryptography we need the ability to have secrets, and the only way we currently know how to model secrets is using randomness. A secret is something the attacker does not know; if the attacker does not know it, the attacker has uncertainty about it, so from the attacker's perspective the secret looks at least somewhat random. If there is no randomness, there is nothing to hide. And of course there is a very rich theory here, which I'll compress to a check mark: message authentication codes, signature schemes, beautiful stuff.
A lot of work has been done, and by now we are pretty much willing to believe that, using good enough randomness, we can do these things and do them spectacularly. More importantly, the same holds for privacy: applications like encryption, zero knowledge, commitment schemes, and even more powerful ones like multi-party computation, where the point is to hide something from the attacker. With authentication we are not trying to hide anything — we are asserting who we are and making sure nobody can lie on our behalf; with privacy we are trying to hide things. Either way, for cryptography randomness is completely essential just to define things, and even for soundness, as you will see as we move to the next column, it is very important.

Which brings me to the second column: the world with no randomness. Let's see what this world looks like. The easy part first: cryptography is essentially gone. As I said, we need randomness to model the ability to have secrets, so at least as far as we know there is no cryptography. Even in complexity theory there are pretty dramatic implications: in the world with no randomness, IP more or less collapses. It is still a pretty large class, but even if the prover can interact, it doesn't really help; we collapse to the class of languages decidable in non-deterministic polynomial time, NP. So it's a big drop.

So we can already see there is a big gain in having randomness. Then the question is: am I essentially done? What am I going to talk about for the remaining four-plus hours? The setting we are going to talk about is the realistic concern in between. Here we assumed no randomness; there we assumed ideal randomness, and ideal randomness is a very strong assumption. It means you have a sequence of completely independent, unbiased, unpredictable random bits with no correlations between them. We need to get those bits from somewhere, and it is unclear where. So we would like to weaken this assumption, and what I'm going to try to do for the rest of the day is to find some realistic, hopefully minimal, assumptions about randomness that still allow us to do some of those amazing applications.

The third column we are going to call weak randomness — something more realistic. Let's see how much space I need; if I need more, we'll move it. So this is the setting of weak randomness, and this is what I'll start explaining. What does "weak" mean? Intuitively, we will assume that our randomness is not necessarily a sequence of unbiased uniform bits, but comes from a source — usually we call it an imperfect source. In some situations the source is just assumed to be one particular distribution, maybe not uniform, but that might be a little too specific for us, because usually we don't know the distribution exactly. If, for example, I want to derive secret keys from my fingerprint, it's not as if I know the exact distribution of my fingerprint.
I may need some assumptions about it, but I won't know it exactly. So instead we will abstract things: we will say that our randomness comes from a source S — I never know how to write a script S, but whatever that letter is. (Oh wow, that's actually pretty cool, something is happening upstairs; I didn't realize it. Now I'll try to write better so that I'm not embarrassed later.)

So what is a source S? It is simply a family of distributions; the word "realistic" is not essential to the definition. If you make this family very wide — essentially allowing almost arbitrary distributions — and some application works for all of them, that is a strong positive result: we assumed very little about the source. If you make the source very restricted, for example if it contains only one distribution, the uniform distribution — which is what we assumed in the ideal column — we can get great results, but they are not robust to the imperfections that happen in reality.

Let me give you some examples of realistic sources; I'll call the distributions X. I don't want to spend too much time on this, because as you'll see, most of the results I'm going to present are stated in a general framework, and only when we need something very particular will we worry about the details of the source. To make things concrete, let's assume our strings are n bits long — they could be variable length, but just to fix notation.

The first source, which is actually very prominent and pretty much the ultimate source as you will see shortly, is what we call a k-weak source. It consists of all distributions X such that for every value x in the domain — I will usually denote distributions by capital letters and particular values by lowercase letters — the probability that X equals x is at most 2^{-k}. One piece of notation I won't use that much but will define anyway: the predictability of X is Pred(X) = max_x Pr[X = x], and, as a first important definition, the min-entropy of a distribution X is H∞(X) = log(1/Pred(X)). So the condition above is equivalent to saying H∞(X) ≥ k.

So what am I saying here, in case you haven't seen it — oh, thank you, you can move the board; maybe in the break we'll move the table, thank you so much, that should be enough I hope. So, in case you haven't seen it, let me check quickly, just so I know whether to keep the current pace or go a little faster: how many of you have seen and worked with some notion of entropy and are relatively familiar with it? Okay, the majority. And who hasn't seen it that much? That's okay too; I'll keep it in mind.
All right, so there are a couple of people; that's fine, we won't go too fast. Anyway, this is actually a very important notion of entropy for cryptography. There is also the notion of Shannon entropy, which I will not use at all in this part, for reasons I'll explain. Shannon entropy is H(X) = Σ_x Pr[X = x] · log(1 / Pr[X = x]) — I think I didn't mess it up. This is an important definition, and I want to contrast it with min-entropy. Roughly speaking, for those who have taken an information theory class, Shannon entropy tells you how many random bits a distribution contains on average, and it is primarily useful when you have many independent copies of the random variable — in the limit you can achieve things like Shannon capacity. But if you have just one sample from one particular distribution, the notion is not very robust. For example, I could have a distribution where some element occurs with probability 99% and the remaining 1% is spread over a really huge support. If you plug that into the formula, the Shannon entropy of the distribution is still pretty high — say around n/100, so linear in n. But in cryptography, or for the applications I'm going to talk about, if this is your source, then with probability 99% the attacker knows everything there is to know. Yes, with probability 1% you can do amazing things, but not many of us want to bet our lives on that 1% event.

So for results in the limit, Shannon entropy is a really amazing notion, but for cryptographic applications we want to make sure that no outcome occurs with significant probability, because if some outcome occurs with, say, probability 1/8, then I can break the scheme with probability 1/8. Probability 1/8 corresponds to min-entropy 3, and indeed if the entropy is 3 our intuition says you shouldn't be able to do good things. So min-entropy is arguably the most natural notion of entropy for cryptography: at the very least, the attacker shouldn't be able to guess our secret — if you want 2^{-k} security, at the very least this should hold. That's why the notion of min-entropy is so useful.

Now, turning back to our big picture: what does this tell us? If we can design an application — a cryptographic scheme, a protocol for proving membership in some language, an encryption scheme — which tolerates a k-weak source, and ideally does so for a pretty low value of k, that is a remarkable result: it works under the minimal assumption we need to hope for any security at all. Because if you don't have min-entropy, there is nothing to do — the attacker knows everything, or can guess the secret with high probability. So min-entropy is really the least we need to assume about the distributions in our source, and if we can have one scheme that surprisingly works for all of them, we are in good shape.
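To make the contrast between Shannon entropy and min-entropy concrete, here is a small Python sketch (my illustration, not from the talk). It evaluates both quantities for the distribution just described: one heavy outcome with probability 0.99 and the remaining 1% spread uniformly over 2^n other outcomes.

```python
import math

heavy = 0.99          # probability of the single outcome the attacker can guess
n = 1000              # the remaining 1% is spread uniformly over 2**n outcomes
light = (1 - heavy) / 2**n   # probability of each of the light outcomes

# Shannon entropy: sum over x of Pr[X=x] * log2(1/Pr[X=x])
shannon = heavy * math.log2(1 / heavy) + (1 - heavy) * math.log2(1 / light)

# Min-entropy: log2(1/Pred(X)), where Pred(X) = max_x Pr[X=x]
min_entropy = math.log2(1 / heavy)

print(f"Shannon entropy ~ {shannon:.2f} bits")     # about 10 bits (roughly n/100)
print(f"Min-entropy     ~ {min_entropy:.4f} bits") # about 0.0145 bits
```

So Shannon entropy still reports roughly n/100 bits of "randomness", while min-entropy correctly says the attacker guesses the secret with probability 0.99.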
When I design the scheme, I don't know exactly which distribution it is; the only thing I assume about it is that it has min-entropy. If that is enough, we're in really good shape. So the ideal goal — the dream — would be: build applications secure for a k-weak source with k much, much less than n. Of course, if you believe there is no randomness at all, then fine, there is no cryptography at all; but if you are willing to assume there is some uncertainty in the world, and you can get security for a k-weak source with k much smaller than n, you are in very good shape. That is what we'd like to do, and we will see that in some cases the answer is yes and in some cases the answer is no.

In the cases where the answer is no, we can of course try other sources, and I'm going to give you just a couple of examples, because this is not that important — even though in the early days there was a lot of research trying to define various imperfect sources and do cryptography with them. The game is: if min-entropy alone is too general — if you cannot have one scheme that simultaneously works for all distributions of min-entropy k — maybe you know a little more about your source and can have a scheme that works under hopefully reasonable assumptions. The problem with such results is that while min-entropy is clean and general, in practice it is a little hard to justify too much structure; the moment we move away from plain min-entropy we start assuming structure, and people can argue about whether that assumption is reasonable or not.

I'll give just one example, because it is historically important — it is a very unrealistic source, and it will become clear later why it matters historically. This is a source that outputs bits, and every bit is almost unbiased — say the probability of zero is between 49% and 51% — but the bits may be correlated. More precisely, for every i and every prefix x_1, ..., x_{i-1}, the conditional probability Pr[X_i = 0 | X_1 = x_1, ..., X_{i-1} = x_{i-1}] is between (1 − γ)/2 and (1 + γ)/2. Think of γ as something small, say 0.02, so every such conditional probability is between 0.49 and 0.51. In reality, of course, we don't expect every bit to be nearly unbiased — there may be bits that are essentially determined by the previous bits — so here we are making an amazingly strong assumption. Those of you who haven't seen this before would think: this is a toy source, it never happens in such a clean form, surely we should at least be able to tolerate it. We will see that even that might not be possible in some cases. The reason this source became important is that it is so clean and makes no independence assumptions, and people were really surprised when impossibility results for this source started to appear; you'll see some of them later. A small sampler sketch for this source follows below, and then there are a couple more examples I'll mention quickly.
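Here is a minimal Python sketch (my illustration, not from the talk) of a γ-Santha-Vazirani source: an adversary may choose each bit's bias as a function of the previous bits, as long as it stays within the allowed window.

```python
import random

def sv_source(n, gamma, bias_strategy):
    """Sample n bits from a gamma-Santha-Vazirani source.

    bias_strategy(prefix) returns the adversary's desired Pr[next bit = 0];
    it is clamped into the allowed interval [(1-gamma)/2, (1+gamma)/2].
    """
    lo, hi = (1 - gamma) / 2, (1 + gamma) / 2
    bits = []
    for _ in range(n):
        p_zero = min(max(bias_strategy(bits), lo), hi)
        bits.append(0 if random.random() < p_zero else 1)
    return bits

# Example adversary: push the next bit according to the parity of the prefix.
def parity_pushing(prefix):
    return 0.0 if sum(prefix) % 2 == 1 else 1.0   # gets clamped to 0.49 / 0.51

sample = sv_source(16, gamma=0.02, bias_strategy=parity_pushing)
print(sample)
```

Every conditional probability stays between 0.49 and 0.51, yet the bits are correlated in a way the adversary controls.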
Two more examples, very quickly; they won't be super relevant. One source is called a block source, sometimes a (k, m)-block source. This is a generalization of the Santha-Vazirani source: you assume the source X = X_1, ..., X_{n/m} comes in blocks of length m, and the min-entropy of each block — here I'm already using a notation I haven't defined, conditional min-entropy, which I'll define in a second, and for the first few hours it's the only entropy I'm going to use — satisfies H∞(X_i | X_1, ..., X_{i-1}) ≥ k. Roughly speaking, we assume that if m is large enough, it is reasonable that every sufficiently large chunk of bits — say every 200 bits — has, I don't know, 20 bits of entropy, even conditioned on the past. So there are no extended periods of time with no new entropy, but we don't assume that every single bit is between 49% and 51%.

As an aside — I'm going to erase the Shannon entropy, since I don't need it; I just wanted to remind you what it is — let me define conditional predictability. The predictability of X conditioned on Y is Pred(X | Y) = E_{y←Y}[Pred(X | Y = y)]: the expected value, over the choice of y, of the predictability of X given Y = y. What am I saying? There are two related variables X and Y. For every particular outcome y of Y we can ask what the most likely guess of X is; but because there is a joint distribution on (X, Y), and y arrives according to that distribution, we are asking: overall, what is the best chance the attacker has of guessing X when Y is chosen at random? For each particular y the attacker will output the most likely value of X, and sometimes he gets lucky, depending on the particular y; so on average his probability of guessing X given Y is exactly this expectation.

Now, is taking an average enough? For some applications it is not, and we need stronger definitions, but for the majority of the applications we will cover it is fine, because in cryptography we usually talk about average-case games. For example, we say: assume my secret key is chosen at random. Of course, if the secret key happens to be the all-zero string and the attacker guesses exactly that, he breaks the scheme — for every particular key there is some attack — but overall the key comes from some distribution, and we measure success on average over that choice. Here, for example, Y could be the signature of 0 and X could be the signature of 1: there is some overall process under which the secret key is chosen, the attacker learns the signature of 0, and he has to predict — to forge — the signature of 1 from it. (A small numerical sketch of conditional predictability follows below.)
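As a small illustration (mine, not from the talk, with a made-up toy distribution), here is how one would compute conditional predictability and conditional min-entropy from an explicit joint probability table:

```python
import math
from collections import defaultdict

def conditional_predictability(joint):
    """Pred(X|Y) = E_{y<-Y}[ max_x Pr[X=x | Y=y] ] = sum_y max_x Pr[X=x, Y=y].

    joint maps pairs (x, y) to probabilities Pr[X=x, Y=y].
    """
    best_per_y = defaultdict(float)
    for (x, y), p in joint.items():
        best_per_y[y] = max(best_per_y[y], p)
    return sum(best_per_y.values())

def conditional_min_entropy(joint):
    return -math.log2(conditional_predictability(joint))

# Toy joint distribution of (X, Y): knowing Y narrows X down a lot.
joint = {
    (0, 0): 0.45, (1, 0): 0.05,   # given Y=0, best guess X=0 (prob 0.9)
    (0, 1): 0.10, (1, 1): 0.40,   # given Y=1, best guess X=1 (prob 0.8)
}
print(conditional_predictability(joint))  # 0.45 + 0.40 = 0.85
print(conditional_min_entropy(joint))     # about 0.23 bits
```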
So overall this is an average. Of course, sometimes the attacker gets really lucky: for some particular signature of 0 he will know the signature of 1 exactly. But as long as it holds on average — because that is how we define the game in general: things happen at random, we choose keys or whatever randomness at random, and you look at the chance the attacker breaks the system — we are fine. [Question.] Right, good — so why a maximum in one place and an average in the other? Because they play different roles. We take the maximum over x because the goal of the attacker is to guess X — to forge a signature, say — so the attacker will output the most likely outcome. Yes, there could be an outcome with probability 99% and another with probability half a percent; he could choose the half-percent one and succeed with probability half a percent, but of course he will choose the 99% one. On the other hand, the conditioning variable Y is what the attacker learns, and what the attacker learns we usually define on average, because that is how the game is run; for a cryptographic scheme this average-case notion is the reasonable one. Sometimes we don't define it that way, but usually we do.

So anyway, that is the definition of a block source, and here things start to look a bit more natural. Historically, this would often be the setting of the best results. It is intermediate between the two cases: the weak source is the case of a single block, where you just assume the min-entropy of that one block; the Santha-Vazirani source is the case where every block is a single bit and each bit has a little bit of entropy; the block source is something in between — you assume a little bit of structure, but not per-bit guarantees.

I probably don't have time, but let me mention, without writing it formally, that such sources come up in real life in cases you don't really expect. I'll spend two minutes on this because it's entertaining. Recently a researcher on the applied side came to me at NYU to talk about extracting randomness from Bitcoin. Some people have suggested that, in principle, you can use Bitcoin as a source of public randomness: intuitively, there is some unpredictability about which transactions will be verified and which new blocks appear, everything is public, and assuming the blockchain converges, this looks like a good source of public randomness that nobody could have predicted in advance. She asked me whether a heuristic people had suggested is realistic. My answer was: in practice, probably yes, but let's see what theory says. So we came up with the following abstraction of the source, which is very generous to the attacker, and somewhat surprisingly we realized that you cannot extract even a single random bit from it. The abstraction is essentially a withholding attack, where the only thing the attacker can do is the following.
Roughly speaking, the source works like this: there is a completely random solution to the proof-of-work puzzle — the next block — drawn completely uniformly at random. With probability 1 − γ it is found by an honest party, say with probability 90%: the solution becomes the next block, the finder publishes it and gets the reward, and everyone sees it. But with probability γ, say 10%, a bad miner finds the solution, and if he doesn't like it he can pretend he didn't find it: the solution goes to the attacker, the attacker decides yes or no, and if the answer is no, people simply keep working until a fresh solution is found. It's a very simple model, and we even assume the solution itself is not controlled by the attacker at all — it is perfectly uniform, maybe a thousand bits long, so at every point in time about a thousand fresh random bits arrive. With probability 90% they are output exactly as they are, completely random, and with probability 10% the attacker cannot modify them, but he can block them and hope that the next block suits him better. Even for that source, it turns out — after using the machinery I'm going to talk about for the next couple of hours — that it is not good enough for cryptography. I'm just saying that even very recently these abstractions of sources keep coming up: I could have modeled some really specific thing, but I tried to abstract at a high level, and already at the first approximation it turned out not to be good enough for what we are going to do.

So anyway, for now I will really try to concentrate on these weak sources, even though later we will sometimes have to make more assumptions, and I'm going to ask the same questions we ask throughout this table: what can we do, and what can we not do, if our secret keys — or, in general, our randomness — come from a weak source? Is the goal clear? All right.

As I said, there are many other sources, and one thing I do want to say — actually this is as good a time as any — is to introduce the next task, the last row. Now that we are talking about this intermediate setting of weak randomness, there are two ways to go. One way is to use it directly for a given application: soundness, authentication, privacy, and so on. The other way is to say: maybe there is a magic function, called a randomness extractor, such that we can extract nearly perfect randomness from the imperfect randomness, and then we are back to the ideal setting.
Then we are all good. That is the point of a randomness extractor, so I'll add this as a column. Of course, in the trivial setting extraction is trivial — the identity function — and if you have no randomness, there is no extractor at all. So now we'll try to fill in this column a little. Let me get some water. Any questions so far?

[Question: by "randomness", do you mean just the key or everything?] Good question. In general it's everything, but for now I'll just say the key; for various applications it will change. It's actually a great question, because jumping way ahead, we will see, somewhat surprisingly, that in some cases if everything is imperfect you cannot do, say, privacy, but if only the secret key is imperfect and you still have local randomness for randomizing things, then you can do something. That will be our column number four at some point.

To give you an overview of what is known here: with weak randomness there are surprisingly amazing results. One is that you can simulate BPP — let me remind people that BPP is the class of problems solvable in probabilistic polynomial time. For that task we do not need very strong randomness: there is a long sequence of works showing how to simulate BPP with extremely weak sources. This might not sound that amazing, because many people really believe BPP is equal to P — so maybe weak randomness is enough simply because you don't need randomness at all — but still, those simulation results are unconditional. So that is one big sequence of celebrated results.

Also, we had a paper a while back which I'll summarize as: IP with block sources equals IP. Turning to interactive proofs, do we really need perfect randomness there? Here there is actually a really exciting open question which, as far as I know, not many people think about: is IP with a weak source equal to IP? You have a prover and a verifier, the verifier has access to a potentially huge random string about which the only thing you know is that it has min-entropy, and the verifier is trying to catch a cheating prover. We have some easy results whose low-level details I won't go into — you can ask me; this is just an overview of the field — but for the interesting setting we don't know any positive implications, and in particular I would conjecture that the answer is no: with a single weak source you probably collapse — I'm not exactly sure — maybe all the way down to some class like AM, something much smaller than IP. So this is actually a great question, and if some of you came to this tutorial more from the complexity-theory point of view, I would love for somebody to solve it. What we did show is that if you assume a block source — essentially that the verifier's randomness comes in blocks — then
we can make very minimal assumptions: as long as, essentially, after every few rounds, no matter what happened before, there is some fresh randomness afterwards, we can actually get positive results here. Let me see if I want to say much more about it — I'll just record a conjecture: I conjecture that with a plain weak source this is impossible, even if you have a lot of entropy, say 99% entropy rate, though of course some complexity assumption will be needed to verify it. So this is a great open question.

All right, so now let's talk, for the first time, about cryptography in these settings. I'm going to erase — I need this board — but hopefully people remember the definition of min-entropy; I'll just keep the reminder that "k-weak" means the only thing you assume is H∞(X) ≥ k.

Let's start with the question of authentication. We could do something more general, but there was a really nice paper of Maurer and Wolf from '97 which essentially shows that you can do one-time message authentication codes — I'm using abbreviations, but I will define everything we need; this is just for the overview — provided that k is at least n/2. So you can do some nontrivial one-time message authentication provided that the min-entropy of the secret key is at least n/2; alternatively, in terms of what is sometimes called the rate, the entropy rate of our source is at least one half. This one half looks a little strange and maybe surprising, but it turns out — we observed it in a paper with Spencer, and later generalized it in follow-up work with Wichs, addressing the earlier question about local randomness — that k < n/2 is indeed impossible. So let me give you a very brief flavor of the impossibility result; the intuition is actually very simple, at least if you don't have local randomness, and it turns out that even local randomness doesn't really help for message authentication. So already we see an interesting pair of positive and negative results: for one-time message authentication codes, n/2 is the threshold.

I will define one-time MACs, but let me specialize right away: let's assume the message domain is {0,1} — one-bit messages — and we want information-theoretic security. In that case, roughly speaking, I can already define security with the notation we have. If the secret key is X, the attacker tries to predict Tag_X(0) given Tag_X(1), or, alternatively, Tag_X(1) given Tag_X(0). I will say the security is δ = 2^{−t} if both of these conditional min-entropies — H∞(Tag_X(0) | Tag_X(1)) and H∞(Tag_X(1) | Tag_X(0)) — are at least t. So now you see we can handily use our conditional min-entropy. Roughly speaking — in general we can talk about an attacker adaptively choosing messages and so on, but let's ignore that for now — the definition says that for a key sampled from X, where X belongs to our source, seeing one tag does not let you predict the other.
In this case the message authentication code has a one-bit message, the secret key just comes from the source, and you can output either the tag of 0 or the tag of 1; if the attacker sees one, he should not be able to predict the other. Now, just as a warm-up: if X is truly uniform, what is a very simple one-bit message authentication code — it turns out to be optimal — for a one-bit message, if I have, say, a 100-bit truly random secret key? What should the tag of 0 be? Right: the first 50 bits are the tag of 0, the second 50 bits are the tag of 1, and this is actually optimal — security 2^{-50} — because the tag of 0 and the tag of 1 are completely unrelated, and each of them is 50 bits long, so you cannot do much. Notice this already starts to explain a little where n/2 comes from.

Now, what would be a very bad situation, where this scheme is not secure? As an aside, let me fix notation: X = X_0 X_1 and Tag_X(b) = X_b, and assume all we know is H∞(X) = n/2 — where n here, and in all of these notations, is the length of the secret key; I defined it earlier when talking about distributions, the randomness comes from {0,1}^n. For now, answering the earlier question, we just assume all the randomness is used for the secret key, because we are looking at very simple schemes. So this is a very simple message authentication scheme; why, if the min-entropy of X is only known to be n/2, do I claim it is not secure? What would be a very simple attack? Let me clarify what that means, so that we have the rules of the game: the reason I differentiate between a source and a distribution is that tolerating a source means finding one scheme that works for every distribution in the source; if you want to attack my scheme, you are free to choose your favorite distribution, as long as it is in the source. So, can somebody give a very simple distribution on X_0, X_1 which has min-entropy n/2 — you cannot predict it with probability better than 2^{-n/2} — but for which this message authentication code is insecure? Right: you can set X = 0^{n/2} ‖ U_{n/2}, where U_m denotes the uniform distribution on m bits — another piece of notation. With this distribution, Tag_X(0) is always equal to 0^{n/2}, so the scheme is completely insecure. I know it's a very simple example, but it gives you some intuition for the games we are playing; a small sketch of it appears below.
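Here is a tiny Python sketch (my illustration, not from the talk) of this warm-up: the split-key one-bit MAC with a uniform key, and then the min-entropy-n/2 distribution X = 0^{n/2} ‖ U_{n/2} under which the tag of 0 becomes completely predictable.

```python
import secrets

N = 100           # key length n (bits); tags are n/2 = 50 bits each
HALF = N // 2

def tag(key_bits, b):
    """Split-key MAC: Tag_X(0) = first half of the key, Tag_X(1) = second half."""
    return key_bits[:HALF] if b == 0 else key_bits[HALF:]

def uniform_key():
    return [secrets.randbits(1) for _ in range(N)]

def weak_key():
    """X = 0^{n/2} || U_{n/2}: still min-entropy n/2, but the first half is fixed."""
    return [0] * HALF + [secrets.randbits(1) for _ in range(HALF)]

# With a uniform key, seeing Tag(1) tells you nothing about Tag(0): security 2^{-50}.
k = uniform_key()
print(tag(k, 0), tag(k, 1))

# With the weak key, the attacker forges Tag(0) without seeing anything at all.
k = weak_key()
assert tag(k, 0) == [0] * HALF   # always the all-zero string
```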
The game we are playing, at least for now: the only thing we know about the source is its min-entropy, and we would like to design a message authentication code for just one bit — a very simple definition, nothing complicated. We just saw that the naive scheme doesn't work, but maybe another scheme works. Here is a cute proof that no scheme works. I guess I can't erase this part, so I'll draw it here and maybe use the board for something else later.

So here is the very simple proof. Let L be the set of all possible tags of 0 and R the set of all possible tags of 1, and for every particular secret key x I draw an edge from Tag_x(0) to Tag_x(1). So I draw a bipartite graph — I don't know what the tag domain is, but what I do know is that there are 2^n edges, one for every secret key, each connecting a tag of 0 to a tag of 1. We assume the tagging is deterministic, but as I said, in the follow-up work we extended this to probabilistic schemes; this is just to give you the idea of how a general impossibility result works.

So what is the game? Let's call 2^n = N. Min-entropy n/2 intuitively means I have to come up with a distribution on edges (keys) such that you cannot guess my edge with probability better than 1/√N: H∞(X) ≥ n/2 means Pred(X) ≤ 1/√N. I want to argue that for any graph like that, I can find a distribution on edges such that either it is easy to guess the tag of 0 outright — in fact with probability 1 — or, given the tag of 0, it is easy to guess the tag of 1, again with probability 1.

Here is what we do: there are two cases. Case one: there exists a node α on the left side whose degree is at least √N — more than √N edges touch α. I claim that in this case I'm good to go. What should I do to forge a tag of 0? (The left nodes are the tags of 0.) If there is a node with at least √N edges coming out of it, I can give a very simple distribution under which the tag of 0 is trivially predictable. Which distribution? The uniform distribution on the edges incident to α. Again, just so we understand the kind of argument — there will be more impossibility results coming — to show that you cannot have any one-time MAC when the min-entropy is below n/2, all I need to do is: for any scheme, which I conveniently represent as a graph, exhibit a distribution under which forging is easy. And the case analysis says: if there is a node on the left with very high degree, take the uniform distribution on its edges. That distribution has min-entropy at least n/2, since it is uniform over at least √N edges. And if X is uniform over those edges, what do we know about the tag of 0?
It equals α: even without seeing the tag of 1, Tag_X(0) = α, so I forge a tag of 0 with probability 1. Well, what if we are not lucky — what if the designer of the scheme (the good guy here, not the attacker) comes up with a graph where all the nodes on the left have low degree? That is the second case. If all left nodes have degree less than √N, I claim I can find something like a matching of size at least √N. Take the very first node: it has fewer than √N edges; choose one of them arbitrarily and delete all its other edges. So I picked one edge and deleted fewer than √N edges, but there are N edges in total. Then take another node that still has edges, again choose one arbitrary edge and delete the rest, and so on: pick a node, keep one arbitrary edge, delete the rest. How many times can I repeat this procedure? There are N edges in total and every iteration consumes at most √N of them, so I can do it at least √N times. What does that mean? I get at least √N chosen edges with distinct left endpoints. It is not necessarily a matching — the right endpoints may coincide — but it is good enough for me. My distribution is the uniform distribution on these at least √N edges; again it has min-entropy at least n/2, because the predictability is at most 1/√N. But now, if I ask for the tag of 0, I get one of these left nodes, and each of these nodes has exactly one remaining edge, because I killed all its other edges. So once I know the tag of 0, I know the tag of 1 for sure.

So, to recap: in case one there is a left node of high degree, and the uniform distribution on its edges lets me predict the tag of 0 with certainty; otherwise all left nodes have low degree, I find this near-matching, and under the uniform distribution on it, the tag of 0 determines the tag of 1 with certainty. One of the two cases must happen, and in either case I get a distribution of min-entropy n/2 contradicting the security of the MAC. It's a cute little proof, and it shows that n/2 is not a random number — there is a reason for it. (Below is a small constructive sketch of this attack.)
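To make the two-case argument concrete, here is a small Python sketch (my own illustration, with made-up names) that, given any deterministic one-bit MAC described by its two tag functions, constructs the adversarial key distribution from the proof: either a high-degree left node exists, or a large near-matching does.

```python
import math
from collections import defaultdict

def attack_distribution(keys, tag0, tag1):
    """Return a set of keys; the uniform distribution on it has min-entropy >= n/2
    and makes forging trivial (tag0/tag1 map a key to its tag of 0 / tag of 1)."""
    threshold = math.isqrt(len(keys))          # sqrt(N), N = number of keys/edges

    # Group the keys (edges) by their left endpoint, i.e. by the tag of 0.
    by_tag0 = defaultdict(list)
    for k in keys:
        by_tag0[tag0(k)].append(k)

    # Case 1: some tag-of-0 value has >= sqrt(N) keys behind it.
    for alpha, bucket in by_tag0.items():
        if len(bucket) >= threshold:
            return bucket                       # Tag(0) = alpha with probability 1

    # Case 2: every left node has low degree; greedily keep one edge per node.
    chosen = [bucket[0] for bucket in by_tag0.values()]
    return chosen[:threshold]                   # given Tag(0), Tag(1) is determined

# Example: the split-key MAC on 4-bit keys (n = 4, so N = 16, sqrt(N) = 4).
keys = list(range(16))
tag0 = lambda k: k >> 2      # first half of the key
tag1 = lambda k: k & 0b11    # second half of the key
print(attack_distribution(keys, tag0, tag1))
```

For the split-key MAC every tag-of-0 value has exactly √N = 4 keys behind it, so case 1 fires and the attacker predicts Tag(0) for free, matching the earlier counterexample.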
So what about positive results? Let's do something positive, because there will be a lot more negative results coming. For the positive result I don't want to say too much — I hope there is enough board space; all right, I don't need this part anymore. The original Maurer-Wolf paper took a standard message authentication code and gave a direct proof that this particular code still works as long as the min-entropy is at least n/2, with some bound. What we observed later — and pushed quite far — is a really general observation: for applications like authentication, you can very often translate security with a uniform key into weaker, but still nontrivial, security with weak keys. Here is the general lemma I want to mention. For any X such that H∞(X) ≥ n − d — this equals k; it is just more convenient here to talk about the entropy deficiency d rather than the entropy itself — and for any function f into the non-negative reals, the expected value of f(X) is at most 2^d times the expected value of f(U_n). The question I'm asking is: I have some non-negative function f, and I look at its expectation under the uniform distribution — that's the good-case scenario, a uniform secret key. On the other hand, maybe I don't have a uniform key; I have one of those weak keys with d bits of entropy deficiency, and I want to see how much worse the expectation can become. The not-so-surprising, very easy answer: it can increase, but by at most a factor of 2^d.

Let me show the proof; it is really very simple. E[f(X)] is the sum over x of f(x) · Pr[X = x]. Because f is non-negative, I can upper bound each probability by the predictability, so this is at most Pred(X) · Σ_x f(x). Just to be difficult, I multiply and divide by 2^n, writing this as (Pred(X) · 2^n) times Σ_x f(x) · 2^{-n}. Now, the first factor is at most 2^d: the min-entropy is n − d, so no element has probability more than 2^d / 2^n, and multiplying by 2^n cancels the denominator. And the remaining factor, Σ_x f(x) · 2^{-n}, is exactly the expectation of f under the uniform distribution, since the uniform distribution assigns every element probability 2^{-n}. So the whole thing is at most 2^d · E[f(U_n)]. (Oh, sorry — yes, f of U; what I wrote is also true, but yes, f of U_n.) A cute one-line thing.
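For reference, here is the lemma and its one-line proof written out cleanly (my transcription of the board computation above):

```latex
\textbf{Lemma.} If $H_\infty(X) \ge n - d$ and $f:\{0,1\}^n \to \mathbb{R}_{\ge 0}$, then
$\mathbb{E}[f(X)] \le 2^{d}\,\mathbb{E}[f(U_n)]$.

\textbf{Proof.}
\begin{align*}
\mathbb{E}[f(X)]
  &= \sum_{x} f(x)\,\Pr[X = x]
   \;\le\; \mathrm{Pred}(X)\sum_{x} f(x)
   \;=\; \bigl(\mathrm{Pred}(X)\cdot 2^{n}\bigr)\sum_{x} f(x)\,2^{-n} \\
  &\le\; 2^{d}\sum_{x} f(x)\,2^{-n}
   \;=\; 2^{d}\,\mathbb{E}[f(U_n)],
\end{align*}
using $\mathrm{Pred}(X) \le 2^{-(n-d)}$. \qed
```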
This was a really simple proof, but surprisingly it has powerful implications, and not just here. In particular, let's apply it to authentication applications. Think abstractly: what is an authentication application? I play some game: I sample a secret key from my distribution, the attacker finds out something, and at the end he has to forge something — to predict a tag, or a signature, of some message, not necessarily of a one-bit message; here we are talking about general message authentication. And how do we measure his success? We literally compute his success probability. So let f(x) be the probability that the attacker A forges — succeeds in outputting whatever he needs to output — when the secret key equals x. This function f is a probability, so it is indeed non-negative, and the conditions of the lemma are satisfied. What does that mean? The quantity we want to upper bound is the probability the attacker forges when the key comes from the weak source, and the lemma says this is at most 2^{entropy deficiency} times the probability that the attacker succeeds with a uniformly random key. But what do we know about the attacker succeeding under the uniform distribution? We compressed the whole theory of message authentication codes into a check mark: we use something good, for which we either prove or assume that the attacker cannot succeed with a uniform key. Therefore we get security with a particular penalty, which is this 2^d.

So the corollary is: if P is a δ-secure authentication application in the ideal randomness model — with respect to some class of attackers — then P is 2^{n−k}·δ-secure in the k-weak model. Whatever security we have with respect to the uniform distribution, against whatever class of attackers — here we can talk about computationally bounded attackers, say for signature schemes; we don't really care, because the derivation (which I unfortunately erased) was just a computation of expected values and never mentioned computational efficiency — we keep, up to a 2^{n−k} factor, in the weak model. Of course this 2^{n−k} looks pretty bad, and in some cases it is too bad, but for message authentication codes it is not.

For MACs — I don't want to spend too much time, just to give you an idea in case you haven't seen it — here is the classical one-time MAC. Write the key x as a concatenation a‖b, where both a and b are n/2 bits long, and for convenience view each of them as an element of the finite field GF(2^{n/2}); we also assume the message m belongs to this Galois field. The classical tag is Tag_{a,b}(m) = a·m + b. This is one of those classical constructions you learn
in one of the first cryptography classes. This function is pairwise independent — more generally you can apply any pairwise independent hash; if you haven't seen it, don't worry, here is the intuition (you can of course make a formal proof, it's not very hard). For any value α — which would be Tag_{a,b}(m) — any α', and any m ≠ m', there exists a unique pair (a, b) such that α = a·m + b and α' = a·m' + b. That is exactly what pairwise independence means. In particular, even if I tell you α, you learn nothing about α': for every possible α' there exists a unique explanation (a, b). Said more precisely, in our language: for any m ≠ m', Tag_X(m') is uniform over n/2-bit strings even conditioned on Tag_X(m). So that is the classical one-time MAC, and the security it achieves is essentially a generalization of splitting the key in two: if I tell you Tag_X(m), then for any other message m' the tag is still a uniform element of the field, so you cannot predict it with probability better than 2^{-n/2}. If you have seen it, great; if not, the only thing you need to know is that it is a very simple construction, with a one-line proof, achieving security 2^{-n/2} with a uniform key.

So let's just apply our corollary here. The new security is δ' = 2^{-n/2} · 2^{n−k}, which simplifies to 2^{n/2−k}. What does this mean? Remember what I was trying to tell you: as long as k is greater than n/2, we have some nontrivial security, and indeed this is exactly what we get — as long as k is a little more than n/2, you start to get a little bit of security. And the way we got it — unlike Maurer and Wolf, who had a specific proof, which was short and perfectly fine — is part of a general methodology: for authentication applications you generically get at least some security with weak keys, at least when the entropy deficiency is small. Security cannot suddenly disappear when you go from uniform keys to slightly non-uniform keys; it decreases fast, exponentially, but it doesn't vanish in one shot. We will contrast this with things like privacy momentarily, where the moment you lose one bit of entropy in the key, the security can completely disappear. For authentication applications this is not the case.

So let me write something in this column to fill it in. For authentication, something is possible — at least I'm talking about one-time MACs, which I'll abbreviate OTM — if and only if k ≥ n/2. A small sketch of the field-based MAC follows below.
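Here is a minimal Python sketch (my illustration, with a toy field size) of the pairwise-independent one-time MAC Tag_{a,b}(m) = a·m + b, instantiated over GF(2^8) so the whole key space can be enumerated; in the lecture's notation this corresponds to n = 16 and tags of n/2 = 8 bits.

```python
import secrets

POLY = 0x11B   # x^8 + x^4 + x^3 + x + 1, a standard irreducible polynomial for GF(2^8)

def gf_mul(x, y):
    """Multiplication in GF(2^8): carry-less multiply, reducing modulo POLY."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0x100:
            x ^= POLY
    return r

def tag(a, b, m):
    """Tag_{a,b}(m) = a*m + b over GF(2^8); '+' is XOR in a binary field."""
    return gf_mul(a, m) ^ b

# One random key (a, b): two field elements, i.e. an n = 16 bit key.
a, b = secrets.randbelow(256), secrets.randbelow(256)
print(tag(a, b, 0x01), tag(a, b, 0x02))

# Pairwise independence check: for m != m', each pair (alpha, alpha') of tags
# is produced by exactly one key (a, b), so seeing one tag reveals nothing.
m0, m1 = 0x01, 0x02
counts = {}
for a in range(256):
    for b in range(256):
        pair = (tag(a, b, m0), tag(a, b, m1))
        counts[pair] = counts.get(pair, 0) + 1
assert len(counts) == 256 * 256 and set(counts.values()) == {1}
```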
So something non-trivial is possible; something very good is unfortunately impossible, and there is not that much work on this. We had one of the earlier papers which showed that we can also do signatures for any k using some exponential assumptions. I do not want to spend too much time on it, I want to move on to privacy, but just to give you an overview of authentication, the common wisdom is as follows. Information-theoretically, I think we can characterize what is possible: at least for one-time MACs the tight bound is n/2, and for multi-time MACs, essentially the more demanding the MAC, the more demanding the entropy requirement becomes, at least information-theoretically. But under computational assumptions, if you start to believe in things like exponentially hard one-way functions, there are some works (given that it is my own work, I can say I would not call them super elegant) showing that under appropriately strong assumptions you can do it. So the common belief is that for authentication applications, if you assume something strong enough, it is okay: you can actually do them with weak randomness and achieve this desirable goal. One way to do it in practice, for those who know about cryptographic hash functions: you just take your key and plug it into SHA-3 (before it would have been SHA-2, now let us say SHA-3). Intuitively, for practical purposes, this will be good enough as long as the key was unpredictable to begin with: assuming there are no serious weaknesses in SHA, everything will be scrambled enough. So, by and large, I will write at the top of the column a check mark with caveats. There are only two papers which consider this setting, and the reason there are not more is that, given those impossibility results, it is clear you need to start making assumptions, and right now the only assumptions under which we can do something are things like exponentially secure one-way permutations. And there are more caveats: we cannot actually handle a general weak source, we can only do it for block sources, and it is kind of ugly. So feel free to look into it and clean it up. For example, for one-time MACs it should be entirely doable to handle k less than n/2 under, hopefully, the same kind of reasonable assumption, some kind of exponential hardness; people have not done it, I guess, because there was more fun in this corner. Oh, yes, a question. Well, we cannot say exactly how much; it depends on the exponential security we assume from our one-way permutation, and to be honest it is kind of tough, because we do not have that many candidates for one-way permutations, essentially discrete log and a few others, so it is much less elegant than what I am going to talk about next. And about k equal to n/2: the impossibility result, which I guess I have erased by now, yes, technically you need that threshold, and it is actually tight.
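For completeness, here is what that practical heuristic from a moment ago looks like in code. This is only a sketch of the folklore "hash the weak key" trick; the choice of SHA3-256 and the variable names are mine, and nothing in the information-theoretic discussion above formally justifies this step.

```python
import hashlib

# Folklore heuristic: scramble an unpredictable but non-uniform key with a
# cryptographic hash and use the digest as the actual MAC/encryption key.
# This is a practical heuristic, not a provable extractor.
weak_key = b"bits gathered from an imperfect entropy source"  # placeholder input
derived_key = hashlib.sha3_256(weak_key).digest()              # 32-byte scrambled key
```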
But I am just giving you qualitative statements here. Okay, so, privacy. Maybe we will really start privacy after the break; I just want to set it up a little and give you some examples. Since we are moving to privacy, I can start erasing these things. Let us look at the one-time pad, a very nice, conventional privacy application. What is the one-time pad? The encryption is Enc_x(m) = x + m. It could be one bit or many bits; let us say both x and m are n bits long. Now, this scheme is perfectly secure if x is drawn from the uniform distribution, because intuitively, for any message m, the distribution of m plus a uniform string is still uniform: if you take a message and shift it by a uniformly random amount, you get a uniformly random string, so you obviously learn no information about the message. Now, as a simple warm-up question: what if the only thing you know is that the min-entropy of x is n - 1? Is the one-time pad still a secure scheme? We have a classical scheme that is secure with uniform randomness, and maybe the degradation I just erased still kind of works here. Any takers? It should be pretty simple. Yes, exactly: set x to be uniform on the first n - 1 bits, concatenated with a zero. That is a distribution of min-entropy n - 1, so very high min-entropy. But now look at the encryption of m_1 ... m_n. Writing it out explicitly, it is (m_1 ... m_{n-1} || m_n) + (key || 0), which is some noise on the first n - 1 bits concatenated with m_n itself. So I am handing you the last bit of the message in the clear. Of course you can say this source is artificial, but remember, this is the game we play: if you object that it is artificial, then we have to make more assumptions beyond min-entropy, and ideally we would like to base everything on min-entropy alone. And you could ask, what about Santha-Vazirani sources? Even for a Santha-Vazirani source, if the last bit is just slightly biased, say 51%, you can now guess a bit of the message with probability 51%, and for a privacy application this is genuinely bad: even a 1% break of privacy is considered bad; we want breaks of privacy to be negligible. So here we have a very sharp drop. With authentication you have relatively smooth behavior: you start with good enough security, it degrades slowly enough to get somewhat interesting results with weak randomness, and in the paper I pointed to we derive even more surprising implications of this trivial one-line lemma; you do not even have to reason about the application, you just translate from uniform to non-uniform keys. Here, by contrast, you lose a single bit of entropy in the key, and at least this particular scheme collapses completely.
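Here is a minimal Python sketch of the counterexample just described; the parameters and variable names are mine, and the fixed bit of the weak source is taken to be the least significant bit.

```python
import secrets

def weak_key(n: int) -> int:
    """Source of min-entropy n-1: uniform on n-1 bits, lowest bit fixed to 0."""
    return secrets.randbits(n - 1) << 1

def otp_encrypt(key: int, msg: int) -> int:
    """One-time pad over n-bit strings: bitwise XOR."""
    return key ^ msg

n = 16
msg = 0b1010_1010_1010_1011          # any message whose last bit is 1
ct = otp_encrypt(weak_key(n), msg)
print(ct & 1)                        # always prints 1: the last message bit leaks
```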
And of course the question is: maybe we can design a smarter one-time pad, maybe we can fix the scheme and somehow make it secure, or, more ambitiously, maybe we can design a randomness extractor for the weak source over here and get some positive results. Unfortunately, and this is what we are going to spend time on, let me give you a two-minute preview, and then I guess we will take the break before the second part, where we will start proving a lot of these impossibility results. It will turn out that the answer is largely no, and in a very strong sense. Let me pre-fill the table: there will be a pretty big X over here, and this X will hold in multiple senses; it will be pretty strong. Not only will we be unable to tolerate weak sources, we will be unable to tolerate even very restricted sources like Santha-Vazirani sources. Privacy somehow turns out to be much more demanding. In fact, we will see one of my favorite results of mine in this area, which says that essentially the only way to achieve good privacy, to build good enough encryption from a source (which does not necessarily have to be a weak source), is through randomness extraction: if you can do anything at all, then you can do it by first extracting an essentially uniform string and then using a one-time pad. And there is even a funny story about it, which I will tell before going to the break. We had two papers on the subject. The first paper, which I wrote in 2002 with Joel Spencer, was titled "On the (non)universality of the one-time pad." We asked exactly this question and said: there is hope that we do not need to go through randomness extraction. The example was pretty artificial, and I will tell you exactly what it was after the break, but essentially we built an artificial source from which you can perfectly encrypt one bit, yet from which you cannot extract one bit. So there exists a source where you can encrypt one bit but not extract one bit, and we called the paper "On the (non)universality of the one-time pad": a very geeky title, but with a point, namely that there is hope in the world and maybe we can build cryptography on non-extractable sources. Then we wrote a follow-up paper, which I was tempted to call "On the universality of the one-time pad," but that would have been making fun of myself, so I decided against it; it was called something like "Does privacy require true randomness?", and unfortunately it showed that, modulo a very small slack, if you have a source of randomness that is good for encryption, it must essentially be extractable. So there will be a pretty big X, and we are actually going to see all those proofs, and they are surprisingly simple. They were perhaps not simple when we first came up with them, but over the years, as I taught and simplified this material, we found many simplifications, so by now all of these arguments are pretty elementary, and hopefully you will see them. What I am going to start the second third of this tutorial with is a general framework for showing impossibility of privacy for a given class of
sources. This is the culmination of several works of ours, but the latest one was just from Crypto last year, in 2015: I had a paper with a visiting student, Yanqing Yao, which cleaned up those previous papers and gives something very simple and general, and hopefully you will appreciate it; it also explains why privacy is so much more demanding on randomness. So that is the plan for the next hour and a half: we will largely be concentrating on these two rows, and then we will transition to the last column for the third part of this tutorial. Now I will take questions before the break. Also let me know, or feel free to come to me in the break: we will see, maybe I will make a slide over there to save the board, and if I am a little too slow I am totally happy to speed up; I just was not sure, and this seemed like a good way to get people warmed up about the topic, but I can definitely go faster if people ask me to. Oh, sorry. Well, avoiding extraction was not exactly the goal, but in some sense, yes. You could say the question is whether we must go through extractors: if extractors were possible and very cheap, of course we could always just go through extractors, but from a philosophical perspective it would be nice if encryption did not require extraction. So from this perspective you can say yes, I am trying to avoid extractors, and in that sense it could be one goal, but of course our real goal was to figure out whatever the truth is. What will be very interesting is to compare the second third with the last third of the talk. When we do not have any local randomness, when everything is in the secret key, roughly speaking we will see that privacy really does require extraction; there is no way around it. When you move to the last column, as we will see (I am stealing my own thunder here), extraction becomes possible, and there, again, the common wisdom was that to deal with a bad key you simply extract a good key from it and then do conventional crypto. Surprisingly, we showed that this is not the case: we can actually do much better, in a very practical, applied sense. In that setting we can derive better keys which are not uniform and yet are very good keys for privacy and authentication. That will be the third third, just to give you a preview of what we are going to do. Right, that is a very good question, so a couple of answers. Yes, of course, it is not that I insist on it; I happen to like information-theoretic security, it is elegant and so on, and of course it is the strongest kind of statement: if you get a positive result, it applies to everything, and we do not have to worry about somebody breaking factoring and so on. So for positive results it is clearly great. For a negative result, I agree: you can take the view that if the information-theoretic version is impossible, you go for computational security. But here I want to stress that the power of the attacker is to mess with your source of randomness, and the
schemes themselves can be either: all our results (well, not all, but the majority of our results) apply equally to information-theoretic and to computational schemes. So this is really not about information-theoretic security; the question is only what assumption about the source we need to make so that we can tolerate that source. And here you could validly criticize some of the results, not the ones I am going to talk about next, but some of the earlier results that we improved: the attackers there break the security of the schemes, but, as in the example with the graph, if the graph has exponential size, maybe it is hard for me to find a high-degree vertex, or to compute this matching, or even to sample the bad distribution efficiently; those could be hard problems. That is a valid criticism, and that is why, in some of the follow-up results, most of the impossibility results extend, usually with a little effort and sometimes with non-trivial effort, even to efficiently sampleable attackers, if you restrict the distribution to be efficiently sampleable. Sometimes it gets very technical; I might mention it if people are interested, but here I am not going to worry about efficient sampleability. In terms of computational versus information-theoretic security, though, these results really do not care: positive results work for everything, negative results can be extended at least to efficiently sampleable distributions, and in terms of applications, whether it is a signature or a MAC, they apply to everything. The only exception, and I assume Leo will talk about this much more, is that with computational security we can derive better parameters by using computational extractors, where the output is not statistically close to random but only pseudorandom. But even there, surprisingly, and I assume this will also be covered later in the school, the difference only starts to matter above an initial threshold. It is like, I do not know, you have to be over 18 to drive, and maybe the older you get the better you drive, or whatever it is; somehow, for the minimal amount of entropy from which we can extract at all, there is still only a small gap between information-theoretic and computational security. There is a great question, which maybe Leo will address tomorrow, about whether computational extractors have a better, what we call, start-up entropy: the minimal amount of entropy you need to start doing cryptography. In terms of that start-up amount, the difference appears to be very small; the gains become larger once you move on to pseudorandom generators and so on, but so far there seems to be very little gain from computational security in this particular respect. Hopefully this will become clearer, but it is a great question, and I very much encourage you to ask me philosophical questions like this, because, at least for the first day, this is the right setting for them, and I will do my best to answer. All right, so we will resume