Thank you very much, and I thank the organizers for putting on this conference. This is my first crypto conference, I'm really enjoying it, and I'm learning quite a bit here. This is joint work with Ben Fortescue and Min-Hsiu Hsieh. Ben is a postdoc of mine at Southern Illinois University Carbondale, and Min-Hsiu is probably somewhere here in the audience, or maybe he went to the parallel session.

So this is essentially a talk on secret key distillation. The setting we have in mind is that we have three parties, Alice, Bob, and Eve, who have access to a source that is spitting out letters X, Y, and Z: Alice receives X, Bob receives Y, and Eve receives Z. It's an i.i.d. source, so they each witness n copies of it. The goal is for Alice and Bob to convert their streams of X's and Y's into essentially perfectly shared randomness that is secret from Eve. That is, from n copies of the source they want to produce m perfectly shared bits, and at the end of the day they want the result to be essentially factored from Eve, so that Eve has no information about the key. The rate is then the ratio m/n, the amount of key they obtain per copy of the source.

We want to consider this scenario when Alice and Bob have access to some additional resources, and there are a few scenarios we're going to consider. The first, and most restrictive, is when they're allowed no public communication. So in this diagram, here's the source: Alice receives X^n and performs some local processing to obtain a variable X-hat, Bob likewise obtains Y-hat, and Eve just sits there and obtains Z^n. At the end they have generated a distribution P(X-hat, Y-hat, Z^n), and of course they want it close to the target distribution. Like I said, this is the most restrictive scenario.

We can give them potentially a little more power in the form of common randomness. Here we assume that, in addition to the source, there is some shared randomness W that is uncorrelated with the source, and Alice, Bob, and Eve all receive W. Alice's processing is now a function of X^n and W, and likewise for Bob, and again they want to obtain key at the end.

Another scenario is one-way public communication. Here Alice can send a message M to Bob over a channel that Eve is eavesdropping on. This scenario is at least as powerful as the common-randomness one. Why? Because the public message can be treated as common randomness, since Eve has access to it by eavesdropping. Importantly, at the end, the key distribution must be independent of M itself: because Eve learns M, the key must be private from M as well as from Z.

And finally we allow two-way communication, that is, interactive public communication between Alice and Bob. So those are the four scenarios where Eve is nefarious. Then we can consider a fifth one where she's actually helping. In this scenario Eve is being very nice, because not only is she going to communicate with Alice and Bob, but she's going to sacrifice whatever correlations she has. She says: all right, fine, I'm going to help you, and I want you to obtain key that's secret from me at the end.
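Since the requirement goes by quickly in the talk, here is one standard way to formalize "essentially perfectly shared randomness that is secret from Eve"; this is generic notation, not necessarily the exact security definition used in the paper:

```latex
% Alice and Bob map (X^n, Y^n), plus any allowed communication or common
% randomness, to keys K_A, K_B \in \{1, \dots, 2^m\}.  Rate R = m/n is
% achievable if, for every \epsilon > 0 and all sufficiently large n,
\bigl\| P_{K_A K_B Z^n} - \overline{P}_{K_A K_B} \times P_{Z^n} \bigr\|_1
  \le \epsilon ,
\qquad
\overline{P}_{K_A K_B}(k_a, k_b) = 2^{-m} \, [\, k_a = k_b \,] ,
```

so the output must be close to a perfect, uniformly random shared key in tensor product with Eve's system.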
This is the fifth scenario, which we call the helper scenario. It's also known as private key, because the key at the end of the day is private from Eve.

Okay, so let's quickly review some known results on the capacities. The first, with no communication and no common randomness: this is a formula given in a paper by Csiszár and Narayan in 2000, although they don't consider it in this particular form; it's implicit in one of their results. You can see it's given as a maximization over auxiliary variables that satisfy certain Markov chain conditions.

The second scenario, with common randomness, has to our knowledge not been studied before, and one of our results is that we're able to compute this capacity. The third scenario, probably well known to most people in this audience, is the one-way public communication rate, given by Ahlswede and Csiszár in 1993. If you compare the first and third formulas, they look very similar, except that in the third we no longer have the constraint on the auxiliary variables; that constraint is essentially what imposes no communication.

For two-way public communication, notoriously, there is no capacity formula. There are a few upper bounds, and probably the best known is the intrinsic information. This is the conditional mutual information minimized over all variables Z-bar that are generated from Z: we think of Eve taking her variable Z, sending it through a channel, and then we look at the conditional mutual information after the action of that channel. (Sorry, this should be a minimization on the slide; we minimize over all channels.) The intrinsic information is an upper bound on the two-way key rate.

Finally, the helper scenario, the private key scenario, is known exactly: the capacity is the conditional mutual information I(X;Y|Z). In particular, this gives an operational interpretation to the conditional mutual information. If you ask what this quantity really corresponds to physically, one answer you can give is that it's the key rate in the helper scenario.

So the question that we study is: when is it possible to attain the conditional mutual information under the various scenarios that I discussed? Given the capacity formula for the fifth scenario on the last slide, this is equivalent to asking: when does an assisting Eve offer no advantage?

We begin with the easiest scenario to analyze, which is no communication but with common randomness, so we do allow some shared randomness. A key tool that we use here is the common information, introduced originally by Gács and Körner in 1973. If you're not familiar with it, it has a very nice intuitive characterization. First is the idea of a common function: a variable J is a common function of X and Y if it can be computed from X and from Y independently. Another way of saying this is that we can take our distribution P(X,Y) and decompose it as a mixture indexed by J, in such a way that the value of J can be determined both by Alice and by Bob just by looking at their own variables. That's essentially what the decomposition is saying.
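To make the definition concrete, here is a minimal sketch (my own illustration, not code from the paper) that checks the defining property: a candidate J, presented as Alice's table f over x-values and Bob's table g over y-values, is a common function exactly when f(x) = g(y) on every non-zero-probability event.

```python
import numpy as np

def is_common_function(p_xy, f, g):
    """Check that J = f(X) = g(Y) holds on the support of p(x, y).

    p_xy : 2-D array, joint distribution of (X, Y)
    f, g : integer arrays giving Alice's and Bob's local evaluations of J
    """
    xs, ys = np.nonzero(p_xy)          # events with non-zero probability
    return all(f[x] == g[y] for x, y in zip(xs, ys))

# Toy example: X, Y each range over {0, 1, 2, 3}; the support splits into
# the blocks {0, 1} x {0, 1} and {2, 3} x {2, 3}, so the block label is a
# common function.
p = np.zeros((4, 4))
p[:2, :2] = 1 / 8                      # block 0
p[2:, 2:] = 1 / 8                      # block 1
f = np.array([0, 0, 1, 1])             # Alice's block label
g = np.array([0, 0, 1, 1])             # Bob's block label
print(is_common_function(p, f, g))     # True
```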
In that decomposition, the supports are disjoint for the different values of J. The common information, as defined by Gács and Körner, is then built from the maximal common function: J(X;Y) is the common function of X and Y with the largest entropy. So this is a variable that both Alice and Bob can compute, and it has the maximum entropy of all variables they can both compute. It is also unique up to relabeling, which is a nice property.

Like I said, this has a nice intuitive characterization, and probably the easiest way to see it is in terms of a picture; we'll look at a few of these diagrams throughout the talk. Here is a joint distribution of X and Y, and the way to interpret it is that wherever you see x's, those are events that happen with some non-zero probability; all the other dots are events with zero probability. The actual numbers don't matter here; what matters is which events are non-zero and which are zero. We can then block the distribution off like this, and J(X;Y) has three values in this example: it's the block label. The idea is just the following. When Alice looks at her variable and sees, for instance, that she has 4, she knows that Bob's variable lies between 0 and 4, so she knows they lie in this block; there's no possibility of anything down here. Likewise, if Bob sees that he has 6, he knows Alice must be 5 or 6, so we can circle off that region. So, like I said, it's easy to depict.

One of our first results is that we are able to compute the capacity for no communication and no common randomness, and we show that it is actually equal to the capacity with shared common randomness. So no advantage is provided in the scenario where they also have access to side shared randomness. And the capacity is given by a nice formula, the conditional common information: the entropy of J(X;Y) conditioned on Z. This is nice for two reasons. One is that it simplifies and improves the Csiszár-Narayan result I cited earlier: remember, they had a maximization over auxiliary variables, and we show that you don't need to do any maximization; the capacity is just this expression. Also, we are able to handle common randomness, which isn't explicitly covered by their formula.

Then the question I asked at the very beginning is: for which distributions does this key rate equal the conditional mutual information? That is, given this formula, when is this quantity equal to I(X;Y|Z)? We were able to solve this, and we identify a specific class of distributions. I'll describe it; it gets a little technical, but it's still fairly intuitive.

The idea is that we introduce a new quantity, the conditional common function, which we denote J(X;Y|Z). Now we have a third variable Z, and we look at the different conditional distributions: when z = 0 we draw one plane, when z = 1 we draw another, and we compute the common information separately for each of these planes, that is, for each conditional distribution.
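Before looking at the blocks in these planes: pictures like this suggest an algorithm. The blocks are just the connected components of the support, viewed as a bipartite graph between x-values and y-values. Here is a sketch of that computation (mine, assuming finite alphabets and exact zeros in the pmf):

```python
import numpy as np

def gk_blocks(p_xy):
    """Label each x-value and y-value with its connected component
    ('block') of the support of p(x, y), treating x-values and y-values
    as nodes of a bipartite graph with an edge wherever p(x, y) > 0."""
    nx, ny = p_xy.shape
    x_block = -np.ones(nx, dtype=int)   # -1 marks values with no support
    y_block = -np.ones(ny, dtype=int)
    block = 0
    for x0 in range(nx):
        if x_block[x0] >= 0 or not p_xy[x0].any():
            continue
        stack = [("x", x0)]
        while stack:                    # flood-fill one component
            side, i = stack.pop()
            if side == "x":
                if x_block[i] >= 0:
                    continue
                x_block[i] = block
                stack += [("y", y) for y in np.nonzero(p_xy[i])[0]]
            else:
                if y_block[i] >= 0:
                    continue
                y_block[i] = block
                stack += [("x", x) for x in np.nonzero(p_xy[:, i])[0]]
        block += 1
    return x_block, y_block

def gk_common_information(p_xy):
    """Entropy H(J(X;Y)) of the block label, in bits."""
    x_block, _ = gk_blocks(p_xy)
    pj = np.array([p_xy[x_block == j].sum()
                   for j in range(x_block.max() + 1)])
    return -(pj * np.log2(pj)).sum()
```

For the conditional common function described next, one would run gk_blocks separately on each conditional plane p(x, y | z); the capacity formula from this slide is the entropy of the global block label conditioned on Z.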
In this example, when z = 0 we get these two blocks, and when z = 1 we get these three blocks. And you can see something if you look at the (1,1) event in the different planes: the situation is that Alice and Bob are not able to determine whether the event (0,0) lies in the same block as (1,1) unless they know the value of z. I'll say that again. When z = 0, they know that (0,0) is in the same block as (1,1); when z = 1, the two events are in different blocks. But if they don't know z, they don't know whether one event is in the same block as the other. This is important: it's some information that Eve has access to, because it's conditioned on Z, but that Alice and Bob don't.

We wanted to try to capture this, and that's what this class of distributions, which we call uniform block independent, does. We say a distribution is uniform block independent if, first, Alice and Bob can compute J(X;Y|Z) for all values of z; more formally, this variable is a function of the common information J(X;Y). The second condition is that within each block, Alice and Bob are uncorrelated. What do I mean by a block? If we look inside one of these circled regions and restrict the distribution to it, we want that restricted distribution to be uncorrelated: Alice and Bob should share no correlations beyond the block label itself. In terms of a decomposition, it means we can take our distribution P(X,Y|Z) and, conditioned on Z and J, the variables X and Y become independent. So those are the two conditions.

Maybe it's easiest to look at some examples to understand what this means. This first one is not uniform block independent, again because it has the property that, here and here, the event (1,1) is in the same block as (0,0), but over here it's in a different block. So Alice and Bob aren't able to determine the value of J(X;Y|Z) simply by looking at their own variables. Here's another example that is not uniform block independent: here they are able to determine the block number, but the problem is that within this block they share some additional correlations; they're not uncorrelated inside it. But this nice distribution here is an example that is uniform block independent, because they can determine their block number and, within each block, they share no additional correlations.
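Here is a sketch of a checker for these two conditions. This is my reading of the definition, so treat it as an approximation of the paper's statement; it reuses gk_blocks from the sketch above.

```python
import numpy as np
# Reuses gk_blocks from the previous sketch.

def is_uniform_block_independent(p_xyz, tol=1e-9):
    """p_xyz : 3-D array p[x, y, z].  Checks (my reading):
    (i) within every z-plane, each global block stays a single block,
        so the conditional block label is determined by the global one;
    (ii) inside each block of each plane, X and Y are independent."""
    x_glob, _ = gk_blocks(p_xyz.sum(axis=2))
    for z in range(p_xyz.shape[2]):
        plane = p_xyz[:, :, z]
        if plane.sum() <= tol:
            continue
        x_loc, y_loc = gk_blocks(plane)
        for j in range(x_glob.max() + 1):                # condition (i)
            labels = {x_loc[x] for x in np.nonzero(x_glob == j)[0]
                      if x_loc[x] >= 0}
            if len(labels) > 1:
                return False
        for b in range(x_loc.max() + 1):                 # condition (ii)
            sub = plane[np.ix_(x_loc == b, y_loc == b)]
            rank1 = np.outer(sub.sum(axis=1), sub.sum(axis=0)) / sub.sum()
            if not np.allclose(sub, rank1, atol=tol):
                return False
    return True
```

Running this on encodings of the three examples just described should give False, False, and True respectively.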
It turns out that the distributions with this property are exactly those for which the no-communication key capacity equals the conditional mutual information. So that settles the no-communication scenario.

Now we move on to the one-way public communication scenario. We'll just consider communication from Alice to Bob; you switch the variables if you want Bob to Alice. Recall the capacity formula, the single-letter characterization. What you can do is play around with the chain rule a little, massage some variables, and write the right-hand side in terms of the conditional mutual information minus three terms, each of which is non-negative. So in order to have equality, we need those three terms to vanish. That is, the one-way capacity equals the conditional mutual information if and only if there exist variables U and V satisfying the Markov chain condition such that these three information terms vanish.

So when can we find distributions with that property? We weren't able to give a completely clean answer, but we can give a somewhat clean answer in the form of a strong necessary condition. Roughly: whenever Alice has a value with non-zero probability and Bob has a value with non-zero probability lying in different blocks of some conditional plane, so that the joint event is a hole there, then the joint event must have zero probability in every plane; and if that doesn't hold, the capacity is not equal to the conditional mutual information.

Again, it's probably easier to see this with an example, so look at this distribution. First go to the z = 1 plane, where we have these two nice blocks. In the z = 1 plane, Bob has some probability of the value 1, and Alice has some probability of the value 1 as well, in different blocks, so the theorem says the joint event (1,1) should have zero probability there, and indeed it's a hole. However, if we go to the z = 0 plane, we see that there is a non-zero probability for the (1,1) event. What our theorem tells us is that because this hole does not exist in every single plane, in every conditional distribution, we cannot attain the upper bound on the key rate.

As I said, this is a necessary condition. It is also sufficient when Alice's and Bob's supports, or rather their ranges, are the same in each conditional distribution. The only case where the theorem doesn't also give a sufficient condition is when you have distributions where, for one value of z, Alice's range differs from her range at another value, and likewise for Bob.
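Since the obstruction is easy to state computationally, here is a sketch of a detector for it. This is my simplified paraphrase: I flag any pair with positive marginals but zero joint probability in one plane, which covers the different-blocks case from the theorem; the paper has the precise statement.

```python
import numpy as np

def find_oneway_obstruction(p_xyz, tol=1e-12):
    """Look for a pair (x, y) that is a 'hole' in one z-plane (both
    conditional marginals positive but the joint entry zero) while being
    a non-zero event in some other z-plane."""
    nz = p_xyz.shape[2]
    holes = set()
    for z in range(nz):
        plane = p_xyz[:, :, z]
        px, py = plane.sum(axis=1), plane.sum(axis=0)
        for x in np.nonzero(px > tol)[0]:
            for y in np.nonzero(py > tol)[0]:
                if plane[x, y] <= tol:
                    holes.add((x, y))
    for (x, y) in holes:
        for z in range(nz):
            if p_xyz[x, y, z] > tol:
                return (x, y, z)   # hole (x, y) is filled in plane z
    return None                    # no obstruction of this kind found
```

If this returns a triple, then, per the necessary condition in the talk (modulo the range caveat just mentioned), the one-way key rate is strictly below I(X;Y|Z).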
Okay, so moving along to two-way public communication. It was probably a bit too much to expect that we could settle this completely, given that we don't even know the capacity for two-way secret key distillation. Nevertheless, we are able to get a nice necessary condition, and it's based on the intrinsic information. (Again, this should be a minimization; I just copied it incorrectly from the first slide.)

The idea is the following. We know that the intrinsic information upper-bounds the two-way key rate, and in turn the intrinsic information is upper-bounded by the conditional mutual information. So if we are able to construct a channel for Eve such that the conditional mutual information strictly decreases, then clearly, by this chain of inequalities, the two-way rate is strictly less than the conditional mutual information. That was the idea we wanted to exploit: construct channels for Eve that decrease the conditional mutual information.

The question we asked is: what sort of distributions allow us to do this? We were able to identify quite a large class, actually. I won't go into too many details here, they're all given in the paper, but basically we introduce a relationship between two distributions, which we denote by this dark arrow, and it satisfies certain properties. It turns out that if you have a distribution P(X,Y,Z), and any two of its conditional distributions satisfy this relationship, then that is sufficient for the two-way capacity to be strictly less than the conditional mutual information. I know I glossed over exactly what this relationship is, but it's probably easiest to give the intuition with an example.
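Here is a miniature version of that strategy (my own brute-force sketch, not the paper's construction): compute I(X;Y|Z-bar) for a candidate channel applied to Eve's variable, and randomly search for a channel that strictly lowers the conditional mutual information. Any success certifies, via the chain of inequalities above, that the two-way rate is below I(X;Y|Z).

```python
import numpy as np

def cmi(p_xyz, tol=1e-12):
    """I(X;Y|Z) in bits for a 3-D joint pmf p[x, y, z]."""
    pz = p_xyz.sum(axis=(0, 1))
    val = 0.0
    for z in np.nonzero(pz > tol)[0]:
        pxy = p_xyz[:, :, z] / pz[z]            # conditional plane
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        mask = pxy > tol
        val += pz[z] * (pxy[mask]
                        * np.log2(pxy[mask] / np.outer(px, py)[mask])).sum()
    return val

def apply_channel(p_xyz, w):
    """Replace Z by Zbar via a channel w[z, zbar] (rows sum to 1)."""
    return np.einsum('xyz,zw->xyw', p_xyz, w)

def search_lowering_channel(p_xyz, n_zbar, trials=2000, seed=None):
    """Random search for a channel with I(X;Y|Zbar) < I(X;Y|Z)."""
    rng = np.random.default_rng(seed)
    base = cmi(p_xyz)
    best, best_w = base, None
    for _ in range(trials):
        w = rng.random((p_xyz.shape[2], n_zbar))
        w /= w.sum(axis=1, keepdims=True)       # normalize rows
        val = cmi(apply_channel(p_xyz, w))
        if val < best - 1e-6:
            best, best_w = val, w
    return base, best, best_w                   # best_w certifies a gap
```

Random search is of course crude; the point of the paper's arrow relation is to identify, analytically, which pairs of conditional planes guarantee that such a channel exists.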
The first example is when we mix an uncorrelated distribution with a perfectly correlated distribution, and we give Eve the flag: we tell Eve which distribution is which. So Eve knows which rounds are uncorrelated and which are perfectly correlated. For example, with a three-by-three distribution, when z = 0 Alice and Bob are perfectly correlated, and when z = 1 they are completely uncorrelated. You can apply our theorem to this, and the spirit of the theorem is actually very similar to the one-way scenario: you want to identify holes in your distribution, such that in one plane you don't have a hole and in another plane you do. By a hole I mean a non-zero event's zero-probability counterpart; it really does look like a hole when you draw the distribution like this. In this particular example you can always find such a hole when the distribution has this form of a correlated piece mixed with an uncorrelated one, and so our theorem suffices to show that the two-way capacity is less than the conditional mutual information.

Another nice case is when the alphabet sizes are small. When Alice's and Bob's variables are binary, ranging over just 0 and 1, we don't even need to introduce this notion of the black arrow: if you can find a hole between any two conditional distributions, then the key rate is less than the conditional mutual information. There's still ongoing work to try to sharpen this, because it's a necessary condition but not a sufficient one; nevertheless, it's able to handle quite a number of instances and prove that the rate is below the helper rate.

Okay, another thing that we considered in the paper is communication dependency. We wanted to know when the direction of communication matters for optimal key distillation, and also when the number of exchanges matters. The first example we have is a distribution where Eve's variable runs from 0 to 2. Recall that, as we showed, you can write the one-way capacity in the decomposed form, and for it to equal the conditional mutual information, those extra terms must vanish. It's not very difficult to show, if you analyze this particular distribution, that you cannot find variables U and V that satisfy the Markov chain and also make those information terms vanish. So the one-way key rate is less than the conditional mutual information if communication goes from Alice to Bob.

However, it's very easy to see that you can attain it if the communication goes from Bob to Alice. The reason is that the only place where they have any correlation is when z = 0; there's no correlation in the other planes, and Bob can determine which case he's in just by looking at his own variable. If he sees 0 or 1, he announces that to Alice, and great, they know they have a shared bit; if he sees 2, they just discard the round, which happens with probability one third. So this quick example shows that the direction of communication definitely matters if you want to attain the optimal key rate.
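The exact distribution on the slide isn't reproduced in this transcript, so here is a hypothetical toy distribution with the stated features (my reconstruction, not the slide's): correlation only at z = 0, Bob's value 2 flagging the uncorrelated rounds, and those rounds occurring with probability one third. It reuses cmi from the sketch above.

```python
import numpy as np
# Reuses cmi from the previous sketch.

# Hypothetical reconstruction: z ranges over {0, 1, 2}; correlation exists
# only at z = 0, and Bob can recognize the uncorrelated rounds (y = 2).
p = np.zeros((2, 3, 3))                 # indices: p[x, y, z]
p[0, 0, 0] = p[1, 1, 0] = 1 / 3         # z = 0: x = y is a uniform shared bit
p[0, 2, 1] = 1 / 6                      # z = 1: x = 0, y = 2 (no correlation)
p[1, 2, 2] = 1 / 6                      # z = 2: x = 1, y = 2 (no correlation)

print(cmi(p))                           # 2/3 bit
# Bob-to-Alice attains this: Bob announces whether y = 2; if not, x = y is a
# shared bit about which Eve's z = 0 tells her nothing; if so, they discard,
# which happens with probability 1/3.  Key rate: (2/3) * 1 = I(X;Y|Z).
```

The talk's point is that on the slide's distribution no choice of (U, V) attains this value in the Alice-to-Bob direction; this toy is only meant to illustrate why the Bob-to-Alice direction succeeds.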
Okay, another example: there are distributions where two-way communication is required to attain the conditional mutual information. This is essentially an extension of the previous example, except we add a few more values for Eve and then permute things. Eve's range now has size five; for z = 3 we swap Alice's and Bob's variables, and for z = 2 and z = 4 we swap them again. You can run the same argument and show that both one-way rates are less than the two-way rate.

So, just to wrap up and discuss some conclusions and future research. We were able to compute the capacity formula for no communication with shared randomness. An open question is: what if you allow a sublinear amount of communication? Does that strengthen the key rate? This scenario is related to something known as the entanglement of purification in quantum information theory, which I won't get into, but that's the motivation from quantum information.

Another thing we were able to deduce is that the question of when the key rate equals the conditional mutual information for one-way communication is decided by single-copy analysis; everything here is single-copy. Can the same be done for two-way? Do we actually need to consider the structure of n copies of these distributions, or is single-copy analysis sufficient?

And finally, we would really like to know when the two-way rate is equal to the intrinsic information. We considered the conditional mutual information, but you can strengthen that and ask when the rate equals the intrinsic information. This question is motivated by a related paper of ours on a topic called secrecy reversibility; that's actually what motivated this whole project. I won't get into it, but the details are in that paper; it's an analog of entanglement reversibility.

Okay, so I thank you very much for your time.