Today's lecture is going to be about designing quantum algorithms, so a somewhat different topic from what we saw so far, but still heavily connected to the adversary method that we discussed last time. So the focus of today's lecture is the following. First I will argue that the adversary bound that we saw last time can be formulated as a semidefinite program (SDP); this is fairly easy to see. Then, by taking the dual of this SDP, we will see that there exists a quantum query algorithm that achieves the optimal value of that SDP. In particular this implies that the adversary method, in its more general form, is always optimal with respect to quantum query complexity. This is quite a beautiful result about quantum query complexity; as far as I know there is no equivalent statement for randomized query complexity, for instance. Okay, so let's start with the SDP viewpoint. This is what we saw last time: we proved that the quantum query complexity is at least this adversary value, which depends on the function f that we want to compute. It was expressed as a maximization over all symmetric matrices gamma with real entries (positive or negative) that satisfy the consistency constraint that the (x, y) entry has to be 0 whenever f(x) = f(y). If you don't have to distinguish two inputs, you just put weight 0 on those coordinates of the matrix. My first claim is that this is actually an SDP, so let's transform it into something that looks more like an SDP. First we can get rid of the division by the maximum over i of the spectral norm of the gamma_i — remember, gamma_i is the matrix gamma in which we only keep the entries for which x_i is different from y_i. Equivalently, we can add the constraint that for all i the operator norm of gamma_i is at most one.
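To make this quantity concrete, here is a small numerical sketch (my own illustration, not from the lecture): for the 2-bit OR function, pick an adversary matrix Gamma supported on pairs with f(x) ≠ f(y) and compute the ratio ‖Gamma‖ / max_i ‖Gamma_i‖, which certifies the known lower bound √2 = √n for n = 2.

```python
import numpy as np

# Inputs ordered 00, 01, 10, 11; f = OR of the two bits.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
f = [x[0] | x[1] for x in inputs]

# Adversary matrix: weight 1 on pairs at Hamming distance 1 with f(x) != f(y).
Gamma = np.zeros((4, 4))
for a in range(4):
    for b in range(4):
        dist = sum(u != v for u, v in zip(inputs[a], inputs[b]))
        if f[a] != f[b] and dist == 1:
            Gamma[a, b] = 1.0

spec = lambda M: np.linalg.norm(M, 2)  # spectral (operator) norm

# Gamma_i keeps only the entries where the inputs differ in coordinate i.
norms_i = []
for i in range(2):
    Gi = np.zeros((4, 4))
    for a in range(4):
        for b in range(4):
            if inputs[a][i] != inputs[b][i]:
                Gi[a, b] = Gamma[a, b]
    norms_i.append(spec(Gi))

adv = spec(Gamma) / max(norms_i)  # lower bound on the adversary value
```

Any choice of Gamma consistent with the constraint gives a lower bound; this particular choice already matches the optimal √n for OR on two bits.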
Okay, this is completely equivalent, and similarly we can introduce an additional variable epsilon, a non-negative real number, standing for the objective value, together with a constraint relating epsilon to the spectral norm of gamma. Both kinds of norm constraints can then be written in terms of the order over positive semidefinite matrices (for instance, for a symmetric matrix, having spectral norm at most one is the same as being sandwiched between minus the identity and the identity in the PSD order). In this form it is much easier to see that this is indeed an SDP. So now that we know it is an SDP, let's look at its dual. Strong duality is going to hold, so I'm not going to derive the dual; I'm just going to give you the result and we'll go over it. The dual is written on the right: it is a minimization over n matrices with complex entries, V_1 up to V_n, one matrix for each possible query index. Given those matrices, the objective we minimize is the maximum over all possible inputs x of the sum over i of the diagonal entries V_i[x, x]: we fix a particular position on the diagonal and sum over all n matrices. What are the constraints? The matrices have dimension 2^n times 2^n, so we can fix a position (x, y); the first constraint says that the sum of the entries V_i[x, y], keeping only the indices i for which x_i is not equal to y_i, must be 1 when f(x) is not equal to f(y), and 0 when f(x) = f(y). And the matrices V_i have to be positive semidefinite. So this is the dual, and it achieves exactly the same optimal value as the primal. Let's modify it again a little bit into a form that will be more convenient to manipulate later on, because the goal at the end is to come up with an algorithm achieving the optimal value. So let's look at this positive semidefiniteness constraint: what does it mean?
It means that this inner product is non-negative for every vector y. Another equivalent formulation of positive semidefiniteness is to say that each matrix V_i is PSD if and only if you can write it as a Gram matrix: we have a collection of 2^n vectors, and each entry (x, y) is the inner product between the vector indexed by x and the vector indexed by y. Using this equivalent statement we get the following new formulation of the dual: instead of minimizing over PSD matrices, we minimize over a collection of vectors denoted w_{x,i}, one for each possible input x and each possible query index i. With this Gram-matrix formulation, what we want to optimize is the minimum over such collections of the maximum over inputs x of the sum over i of the squared norms of those vectors. And the constraint is that the sum, over all i for which x_i is not equal to y_i, of the inner products of w_{x,i} with w_{y,i} has to be 1 when f(x) is not equal to f(y), and 0 when f(x) = f(y). Just by using the Gram-matrix viewpoint you can easily arrive at this formulation. The program on the right is the one we will start from in order to design a quantum algorithm achieving this optimal value. So let's move to the quantum algorithm, and this is the main statement that we will prove. We have this dual SDP, and the theorem says: pick any feasible solution — it doesn't have to be the best one. Then there is a way of converting it into a quantum query algorithm that computes f with success probability at least two thirds, using a number of quantum queries on the order of the value of that solution.
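The PSD-to-Gram equivalence used above can be checked mechanically. This little sketch (my own, with arbitrary numbers) factors a PSD matrix V into vectors u_x with V[x, y] = ⟨u_x, u_y⟩ via an eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
V = A @ A.T                        # any matrix of this form is PSD

# Eigendecomposition V = Q diag(lam) Q^T with lam >= 0, so the rows of
# Q * sqrt(lam) serve as Gram vectors for V.
lam, Q = np.linalg.eigh(V)
lam = np.clip(lam, 0, None)        # clip tiny negative rounding errors
U = Q * np.sqrt(lam)               # row x is the vector u_x

err = np.max(np.abs(V - U @ U.T))  # V[x,y] == <u_x, u_y> up to rounding
```

The converse direction is immediate: any Gram matrix U U^T is PSD since y^T U U^T y = ‖U^T y‖² ≥ 0.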
In particular, if you take an optimal solution, then what you get as a corollary is that Q(f) is equal, up to a constant factor, to the adversary bound. Is there any question on this statement? Okay, so let's go over the construction of this algorithm. Let me first give you the high-level idea, the type of technique that we will be using, and then we will instantiate it. The primitive we will use is a so-called angle-detection algorithm, based on the quantum Fourier transform and phase estimation. Let's look first at a simple example. Imagine you are given as input an integer t — it is going to be the number of quantum queries that you are allowed to make — and you have controlled black-box access to a unitary that is either the identity or a rotation by an angle pi over t. Black-box access means that you can apply this unitary, but you don't have access to the internal machinery of how it is implemented. The goal is to decide which case you are in: identity or rotation. There is a very simple algorithm using two registers. On the first register you apply a Hadamard, and then, controlled on the value contained in this first register, you act on the second register: if the first register contains a zero you apply the identity, and if it contains a one you apply U t times. At the end you again apply a Hadamard on the first register. If you do the computation, it is quite simple to see that if U was the identity, then at the end you would get back the all-zeros state, and if U was the rotation, you would observe a one in the first register. So with t applications of this unitary you can decide which case you are in. The idea is that in the rotation case you keep rotating until you can perfectly distinguish the initial state from the rotated state.
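Here is a small simulation of that two-register circuit (my own numpy sketch, not part of the lecture): with U a rotation by pi/t, the controlled application of U^t amounts to a relative phase of −1 on the control qubit (since U^t = R(pi) = −I), so the final Hadamard maps the control to |1⟩ with certainty, while for U = I it stays |0⟩.

```python
import numpy as np

def detect(U, t, psi):
    """Hadamard -> controlled-U^t -> Hadamard on the control qubit.
    Returns the probability of measuring 1 on the control."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Ut = np.linalg.matrix_power(U, t)
    I2 = np.eye(2)
    # controlled-U^t on (control ⊗ target): block diag(I, U^t)
    CU = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), Ut]])
    state = np.kron(np.array([1.0, 0.0]), psi)  # control starts in |0>
    state = np.kron(H, I2) @ state
    state = CU @ state
    state = np.kron(H, I2) @ state
    return np.linalg.norm(state[2:]) ** 2        # control = 1 component

t = 7
theta = np.pi / t
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi = np.array([1.0, 0.0])

p_identity = detect(np.eye(2), t, psi)  # U = I       -> control stays 0
p_rotation = detect(R, t, psi)          # U = R(pi/t) -> control becomes 1
```

Note that the rotation case works for any target state psi here, because R(pi) = −I acts as a global phase on the whole target register.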
The way we will use this idea is as follows. In our quantum algorithm, the identity case will in some sense correspond to f(x) = 1, the rotation case will correspond to f(x) = 0, and the number t will correspond to the value of the feasible solution, i.e., the number of quantum queries we are allowed. So in some sense we will have to come up with a unitary U satisfying those two statements. Still, this is a simplification of what we will really do at the end, so let's look at a slightly more general scenario. Again you want to distinguish between identity and rotation, but now, in the rotation case, the angle is not necessarily equal to pi over t: you only know that it is at least pi over t, and again you want to distinguish the two cases. It is quite simple to generalize the previous approach: we can use quantum phase estimation with t steps of computation. If you feed in as input an eigenvector of the rotation and apply quantum phase estimation, then in the identity case nothing happens — the second register, which contains the phase estimate, is unchanged — and if you do have the rotation, then the second register contains an estimate of the angle phi up to an error of about 1 over t, because you are running t steps of phase estimation. Now, because of the promise that phi is at least pi over t, this phase estimate has to be nonzero. So you can detect which case you are in even if the rotation angle is not exactly known at the beginning. This is closer to what we will apply in our algorithm: the phase estimate is nonzero in one case and zero in the other, and that is what lets us distinguish the two cases. So now, how do we construct this unitary U? This is how U will be obtained.
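To see why about t steps of phase estimation suffice under the promise phi ≥ pi/t, here is a bare-bones simulation (my own sketch): we feed in an eigenvector with eigenvalue e^{i·phi}, so only the phases e^{ik·phi} imprinted by the k-th controlled power matter, and the inverse Fourier transform concentrates the estimate within about 1/M of phi/2π — bounded away from 0 once phi ≥ pi/t.

```python
import numpy as np

def qpe_distribution(phi, M):
    """Distribution of the M-point phase estimate for eigenphase phi
    (the input is an eigenvector, so only relative phases matter)."""
    k = np.arange(M)
    amps = np.exp(1j * phi * k) / np.sqrt(M)  # after the controlled powers
    est = np.fft.fft(amps) / np.sqrt(M)       # inverse QFT (up to convention)
    return np.abs(est) ** 2                   # probability of estimate j/M

t = 8
M = 2 * t                                     # ~t steps of computation
p_id  = qpe_distribution(0.0, M)              # identity: phase 0
p_rot = qpe_distribution(np.pi / t, M)        # rotation with phi >= pi/t

prob_zero_id  = p_id[0]                       # identity -> estimate 0 surely
prob_zero_rot = p_rot[0]                      # rotation -> estimate nonzero
```

With phi exactly pi/t the estimate lands on the grid point j = 1, so the separation is perfect here; for general phi ≥ pi/t the estimate is nonzero only with high probability, which is where the 2/3 success bound comes from.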
It will be based on the well-known Jordan's lemma. U will be chosen as a product of two reflections, through two particular projectors Pi and Delta that we will construct later on, and Jordan's lemma tells us that any such operator can be block-diagonalized in the following way: you have blocks of plus or minus the identity, and then blocks of rotations by certain angles. So in some sense this is the same situation as before, except that you have blocks of identity and blocks of rotation, and the algorithm will in the end work in superposition over all those eigenspaces. When f(x) = 1, we will have a starting state which is mostly in the identity subspace, and when f(x) = 0, the starting state will be in the rotation blocks with sufficiently large rotation angle. So it will be the same as before, but in all those blocks simultaneously. [Question from the audience.] Yes, here I was assuming that the eigenvector is given as input. In the application we will have a vector that we know how to construct, and this vector will be supported over eigenvectors lying in the right subspaces — this will be guaranteed by the construction. So what remains to do is to construct this unitary U and to analyze its eigenspaces, starting from the dual SDP. I'm writing the dual SDP again, and to start the proof we take any feasible solution — a collection of vectors w_{x,i}, one for each input x and index i, each of dimension 2^n — and we let t be the value of this feasible solution. The goal is to design a quantum algorithm that performs roughly t quantum queries to compute f(x). First, here is the Hilbert space over which the quantum algorithm will operate — all the possible states of the quantum memory; the state will be a superposition over the following basis states.
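Jordan's lemma is easy to observe numerically. This sketch (mine, with random subspaces) builds U as a product of two reflections and checks two things: every eigenvalue has modulus 1, and the nontrivial eigenphases are twice the principal angles between the two ranges, whose cosines are the nonzero singular values of the product of the projectors.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 6, 2

def random_projector(dim, rank):
    Q, _ = np.linalg.qr(rng.standard_normal((dim, rank)))
    return Q @ Q.T

P = random_projector(d, r)   # plays the role of Pi
D = random_projector(d, r)   # plays the role of Delta

U = (2 * D - np.eye(d)) @ (2 * P - np.eye(d))  # product of two reflections
eigvals = np.linalg.eigvals(U)

# All eigenvalues lie on the unit circle (U is orthogonal).
max_mod_err = np.max(np.abs(np.abs(eigvals) - 1))

# Rotation angles of the 2x2 Jordan blocks: e^{±2i*theta_j}, where
# cos(theta_j) are the nonzero singular values of P @ D (principal angles).
phases = np.angle(eigvals)
rot = np.sort([ph for ph in phases if 1e-8 < ph < np.pi - 1e-8])
svals = np.clip(np.linalg.svd(P @ D, compute_uv=False), 0.0, 1.0)
pred = np.sort(2 * np.arccos(svals[svals > 1e-8]))
angle_err = (np.max(np.abs(rot - pred)) if len(rot) == len(pred)
             else np.inf)
```

For generic random subspaces there are min-rank many rotation blocks plus trivial ±1 blocks; the degenerate cases (intersecting ranges) have measure zero.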
We have a special state denoted by a star symbol, and then all the states |i, b⟩ — a query index i and a boolean value b — where a last register contains a vector w of dimension 2^n. The state of the algorithm at any point of the computation will be a superposition over those basis states. Now we define two important families of vectors for the proof, indexed by all the possible inputs x: the vectors t_x^+ and the vectors t_x^-. These are not unit vectors, but that's okay. In the first case we have a superposition of the star state and all the states |i, x_i⟩|w_{x,i}⟩, where the w_{x,i} are provided by the feasible solution of the dual SDP. The t_x^- are quite similar, except there is a minus sign, the amplitude in the middle is different — it's minus the square root of 3t instead of 1 over the square root of 3t — and x_i is replaced with 1 - x_i. Before moving forward, let me give you some properties of those vectors which will be useful later on. A first useful property concerns the inner product between t_y^+ and t_x^- for two inputs x and y. The claim is: this inner product is equal to 1 if f(x) = f(y), and to 0 otherwise. So if f(x) = f(y), the two vectors have inner product exactly 1 (they don't have unit norm, but that's okay), and if f(x) is different from f(y), they are orthogonal. Let's prove it; it's very simple, but it's going to use the constraints provided by the SDP.
The star component contributes 1 to the inner product, and then the factors 1 over square root of 3t and square root of 3t cancel each other, so there remains a sum over all i. Now observe that the inner product of |i, y_i⟩ (from t_y^+) with |i, 1 - x_i⟩ (from t_x^-) is nonzero if and only if y_i = 1 - x_i, that is, if and only if x and y differ on coordinate i. So we get a sum over exactly the indices i for which x_i is not equal to y_i — again, this is because in the definition of t^- we took 1 - x_i in the second register. What remains in each term is just the inner product of w_{x,i} with w_{y,i}. If you look at this quantity, it is exactly what appears in the constraints of our SDP: it must be 1 when f(x) is different from f(y), and 0 otherwise. So, rewriting the inner product as 1 minus this sum and using the constraint of the SDP, you get the statement of the claim. [Question from the audience.] Yes, you could have chosen other normalizations; so far the choice is not important, but it will matter later on, because the query complexity of the algorithm will depend on the norms of those states — the norm of the t^- states grows with t — and these norms in some sense encode how precisely we have to run quantum phase estimation. Okay, so now let's define the projectors that will be used to apply Jordan's lemma and the angle-detection algorithm; the definition uses the vectors t^+ and t^-. First, we have a projector Pi_x for each possible input x, defined as follows: it projects onto the states that are either the star state or have consistency between the query index i and the bit contained in the second register (that is, b = x_i).
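Before moving on, the inner-product claim just proved can be sanity-checked numerically on a toy instance (my own illustration; the function, the one-dimensional witness vectors, and the 3-dimensional basis are all my simplifying choices): take f(x) = x on a single bit, with witnesses w_{0,1} = w_{1,1} = (1), which form a feasible solution of value t = 1.

```python
import numpy as np

# Toy function f(x) = x on one bit, witnesses w_{x,1} = (1,).
# Feasibility: <w_{0,1}, w_{1,1}> = 1, as required for the pair f(0) != f(1),
# and the value is t = max_x sum_i ||w_{x,i}||^2 = 1.
t = 1.0
s = np.sqrt(3 * t)

# Basis: index 0 = |*>, index 1 = |i=1, b=0>|w>, index 2 = |i=1, b=1>|w>.
def t_plus(x):
    v = np.zeros(3)
    v[0] = 1.0
    v[1 + x] = 1.0 / s          # component |i=1, b=x_1> with amplitude 1/sqrt(3t)
    return v

def t_minus(x):
    v = np.zeros(3)
    v[0] = 1.0
    v[1 + (1 - x)] = -s         # component |i=1, b=1-x_1> with amplitude -sqrt(3t)
    return v

f = lambda x: x
ok = all(abs(t_plus(y) @ t_minus(x) - (1.0 if f(x) == f(y) else 0.0)) < 1e-12
         for x in (0, 1) for y in (0, 1))
```

The cross pairs (f(x) ≠ f(y)) vanish because the 1/√(3t) and −√(3t) amplitudes multiply out against ⟨w, w⟩ = 1 to cancel the star contribution, exactly the cancellation in the proof.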
It's a kind of consistency check: this projector depends on the input, and to implement it you have to make one or two queries to the input, so it costs a constant number of quantum queries. The second projector, Delta, is the projection onto the span of the t_y^+ vectors restricted to the inputs y for which f(y) = 1; this second projector does not depend on the input. And now here is the product of reflections that we will use in the algorithm: U_x, indexed by the input of the algorithm, is the reflection through the projector Delta followed by the reflection through the projector Pi_x. Again, it's important to notice that one application of U_x costs two quantum queries, so applying U_x t times requires order t quantum queries. We will do the analysis of this product of reflections, but before that let me give you a few more properties relating those projectors to the vectors t^+ and t^-. The first claim: when f(x) = 0, the vector t_x^- is in the kernel of Delta, that is, Delta t_x^- = 0. The second claim: if you project t_x^- onto Pi_x, you get the star state. If I draw a picture — imagine this is Delta (you should imagine it in higher dimension) and this is Pi_x — then t_x^- is orthogonal to Delta, and its projection onto Pi_x recovers the star state. The proof is immediate; it follows from the definition of t_x^-.
For the first point, why is t_x^- in the kernel of Delta? Because Delta is the projection onto the span of the t_y^+ with f(y) = 1, and we know by the first claim that when f(x) = 0, the vector t_x^- is orthogonal to all those vectors, hence to everything in the range of Delta. For the second point, it is a simple calculation using the definition of Pi_x: the non-star components of t_x^- carry the bit 1 - x_i, so they do not satisfy the consistency constraint, and only the star component survives the projection. [Question from the audience.] Yes, it's 1 - x_i, the complement of x_i: x_i is a single bit, so you take its complement. So this was the case f(x) = 0; now let's look at the case f(x) = 1. If f(x) = 1, then t_x^+ is a 1-eigenvector of U_x: applying U_x, the product of the two reflections, to t_x^+ gives back t_x^+. Why is that? It is simple to see, but you can do the computation. First, Delta projects onto the span of all the t_y^+ with f(y) = 1; since f(x) = 1, the vector t_x^+ is in that span, so Delta t_x^+ = t_x^+ and the reflection through Delta leaves it unchanged. Similarly for Pi_x: looking at the definition, every component of t_x^+ is either the star state or consistent with x, so Pi_x t_x^+ = t_x^+ as well. Keep those two properties in mind; they will be very useful later in the proof. Now here is the main statement that we will show; once we have it, it essentially concludes the proof of correctness of the algorithm. Let's define a projector P_theta that projects onto the eigenspaces of U_x with rotation angle smaller than theta in absolute value.
Remember Jordan's lemma: you have blocks of identity and blocks of rotation, and here we just project onto the eigenspaces for which the rotation angle is smaller than the parameter theta. Now we have two lemmas that suffice to conclude the proof. The first lemma: when f(x) = 1 — remember, I said this will be close to the identity case — the star state is mostly supported on eigenspaces with rotation angle zero. Projecting with P_0 acts on it like the identity, up to a small error term with squared norm at most one third. So we may have a small failure probability, but that's okay. And when f(x) = 0 we have the opposite: the star state is mostly supported, up to a small error term, on eigenspaces with rotation angle at least 1 over 2t. So we have either rotation angle zero or rotation angle at least 1 over 2t. Once you have these lemmas, you can take the star state as input, run quantum phase estimation, and measure the phase estimate: if it is zero, then with high probability you are in the first case, so you output f(x) = 1, and if the phase estimate is nonzero, you are in the second case, so you output f(x) = 0. Coming back to the question we had at the beginning about the eigenvector: the star state plays the role of that eigenvector — in the rotation case it is a superposition of eigenvectors supported on eigenspaces with a large rotation angle. So let's do the proofs of those two lemmas, which will conclude the analysis of the algorithm. For the first lemma, what we want to show is that the error term has small norm: this error term measures the distance from the right subspace, so we just want to prove that it is small.
By definition, the error is the norm of (identity minus the projection onto the rotation-angle-zero eigenspace) applied to the star state, that is, the norm of (I - P_0) applied to the star state. Now we use the definition of t_x^+, which is the star state plus the superposition term, so we can replace the star state with t_x^+ minus 1 over the square root of 3t times the sum over i of |i, x_i⟩|w_{x,i}⟩. Observe that when f(x) = 1, we proved that t_x^+ is a 1-eigenvector of U_x; an equivalent statement is that P_0 applied to t_x^+ gives back t_x^+. So if I replace the star state as above, the t_x^+ part vanishes thanks to this claim, and what remains is just the sum from the definition of t_x^+: we get 1 over the square root of 3t, times the norm of (I - P_0) applied to the sum over i of |i, x_i⟩|w_{x,i}⟩. Now I can remove the factor (I - P_0) by contractivity, since its operator norm is at most 1. The states |i, x_i⟩ are orthogonal for different i, so the squared norm of the remaining sum is the sum over i of the squared norms of the w_{x,i}, and by assumption we picked a feasible solution of value t, so this sum is at most t. Altogether, the squared norm of the error term is at most (1/(3t)) times t, which is one third. So the star state is at squared distance at most one third from the P_0 eigenspace. This was the first lemma. The second lemma is to prove that when f(x) = 0, we are instead far from that eigenspace. Let's prove it. For the second lemma we will use the previous claim, and this one is going to be a bit more subtle. We had this picture: when f(x) = 0, the vector t_x^- is orthogonal to the support of Delta, and its projection onto Pi_x is equal to the star state.
So let's denote the angle between Pi_x and Delta by theta over 2. I'm working here in a single two-dimensional space, but because of Jordan's lemma the argument applies block by block. Given that the angle is theta over 2, what can we say about the length of the star state compared to the length of t_x^-? This is just a simple trigonometric identity: the norm of the star state equals sin(theta/2) times the norm of t_x^-. This is sometimes called the effective spectral gap lemma: it gives you a statement about the length of a projected vector in terms of the angle between the two projectors. How do we use that in the proof? Again it suffices to bound the norm of the error term, which in this case is what we get when we apply to the star state the projection onto the eigenspaces with rotation angle at most 1 over 2t. Now, the star state equals the projection of t_x^- onto Pi_x, so the error term is P_{1/(2t)} Pi_x t_x^-, and the trigonometric identity shows that its squared norm is at most sin²(1/(4t)) times the squared norm of t_x^-. Why? We only care about the eigenspaces corresponding to rotations of angle at most 1 over 2t: we have this picture in many different two-dimensional eigenspaces, but in each of them the angle theta is at most 1 over 2t, and by the property above the star-state component in that block has length at most sin(theta/2) ≤ sin(1/(4t)) times the length of the t_x^- component.
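The two-dimensional identity behind the effective spectral gap lemma can be checked directly (my own sketch, with an arbitrary angle and vector): take two lines at angle theta/2, a vector orthogonal to the Delta-line, and compare its projection onto the Pi-line against sin(theta/2) times its length.

```python
import numpy as np

theta = 0.3                      # rotation angle of the Jordan block

# Two lines in the plane at angle theta/2: range(Delta) along e1,
# range(Pi) rotated by theta/2 from it.
p_dir = np.array([np.cos(theta / 2), np.sin(theta / 2)])
Pi = np.outer(p_dir, p_dir)      # projector onto the Pi-line

v = np.array([0.0, 2.5])         # any vector orthogonal to range(Delta)

lhs = np.linalg.norm(Pi @ v)                 # length after projecting
rhs = np.sin(theta / 2) * np.linalg.norm(v)  # effective spectral gap bound
```

In the proof, v plays the role of t_x^- (orthogonal to Delta since f(x) = 0), Pi v plays the role of the star-state component in the block, and theta is the block's rotation angle.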
This is mostly all we need to conclude the proof, so let me continue: we just want to bound this last quantity. We bound the sine squared by the square of its argument, which gives 1 over 16t². And what is the squared length of t_x^-? It is 1, coming from the star component, plus 3t times the sum of the squared lengths of the w_{x,i}; using again that we have a feasible solution of value t, this is at most 1 + 3t². So the squared norm of the error is at most (1 + 3t²)/(16t²), and it is easy to check that this is at most one third. This concludes the proof of how to derive a quantum query algorithm from the dual SDP. Is there any question on this construction? [Question from the audience.] So, in practice there may be many two-dimensional spaces, and we do the analysis for the projection of each of the two states in each two-dimensional space: you project everything into the corresponding two-dimensional space, and the same for the star state. The states are supported over the entire Hilbert space, and we do the analysis block by block in dimension two. Okay, so if there is no other question, I would like to conclude with a few consequences of this construction. Of course the next question is: can we use this kind of construction to arrive at new quantum algorithms, or is it too hard to apply? There have been several attempts at giving a simpler framework, built on these ideas, for designing new quantum algorithms.
One such equivalent formulation is the framework of span programs, which gives you a different way of designing optimal quantum query algorithms. These are still quite complicated to use in practice, but they have some very nice properties, such as composing very easily; they were used, for instance, to find quantum algorithms for formula evaluation. Another extension of this framework that I would like to say a few words about is called learning graphs. This is something which was invented by Aleksandrs Belovs, and it builds on the construction that we just saw, extended to give a more natural approach for designing quantum algorithms. What's nice about learning graphs is that, in some sense, the feasibility condition of the SDP turns into a flow condition over a network, which is much simpler to reason about. And indeed some very nice quantum algorithms have been designed in this framework, for instance for triangle finding and other subgraph containment problems, and also for k-distinctness, and so on — it's quite a fruitful way of thinking when you want to design quantum algorithms. Let me define this framework, and we will see how it relates to the dual-SDP construction that we just saw. So, what is a learning graph? A learning graph is defined as a subgraph of the power-set graph over {1, ..., n}. What does that mean? Again, n is the size of the input, the number of indices that you can query, and the power-set graph has one vertex for each subset of indices, with an edge between two subsets if and only if one is included in the other and their sizes differ by one.
The learning graph keeps a subset of the edges of this large power-set graph: we can decide which edges to keep (we always start from the empty set, but we don't have to take all edges). Secondly, we choose a non-negative weight for each kept edge. Then there is the notion of a 1-certificate, which we already touched on in a problem session. A 1-certificate is defined for inputs x that evaluate to 1: it is a subset of indices such that, if you know the values of the input on those indices, the output of the function is determined. Let me write it properly: we say that a subset S of indices is a 1-certificate for x (where f(x) = 1) if f(y) = 1 for all y such that y restricted to S equals x restricted to S. So any other input y that agrees with x on the indices in S must also evaluate to 1; knowing those values certifies that the output is 1. The learning graph is correct if for each x with f(x) = 1 there is at least one vertex of the graph containing a 1-certificate for x — there can be more than one, but there has to be at least one. The last object we need for the definition is flows over this (directed) graph: we take a unit flow, injecting a flow of value one at the vertex labeled by the empty set, with the property that at each internal vertex what comes in has to come out; the only vertices where flow is allowed to exit — the sinks — must contain a 1-certificate.
If the flow satisfies this property, we say it is a valid flow; of course there may be many possible valid flows over the graph. Given all those definitions, here is the complexity of this computational model. The complexity is the square root of the product of two quantities, m0 and m1. The quantity m0 is just the total weight of the graph, the sum of all the edge weights. The quantity m1 is a maximization over all inputs x with f(x) = 1 of the minimum, over all valid flows for x, of the sum over all edges of the squared flow value on that edge divided by the weight of that edge. We say that such a learning graph computes the function f with this complexity. This seems quite far from quantum computing and quantum algorithms, but there is this nice theorem which tells you that the quantum query complexity is always at most of the order of the square root of m0 times m1. Here, unlike before, it is not an equality: there may be a large gap, and you may lose something by using this computational model; but as I said, for many problems of interest it turns out to give non-trivial quantum algorithms. I'm not going to prove this in detail, just give a hint of the construction. The idea is to reduce to the dual SDP: we have to come up with a feasible solution to the SDP whose value is the square root of m0 times m1. The construction of the feasible solution depends on whether f(x) is 0 or 1, and it is not very complicated to show that it achieves the right value. The crucial argument, in showing that it is indeed a feasible solution, is the following: if f(x) is not equal to f(y), remember that the sum of inner products has to be equal to 1.
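Here is a toy instance of the complexity formula (my own sketch, not from the lecture): for OR on n bits, keep only the edges from the empty set to the singletons {i}, each with weight 1. The singleton {i} is a 1-certificate whenever x_i = 1, and routing the whole unit flow through it gives m0 = n and m1 = 1, so the learning-graph complexity is √(m0·m1) = √n, matching Grover search. (The worst-case inputs have a single 1; more 1s only let the flow split and lower the energy.)

```python
import math

n = 9

# Learning graph for OR_n: edges from the empty set to each singleton {i},
# all with weight 1.
weights = {frozenset({i}): 1.0 for i in range(n)}

m0 = sum(weights.values())               # total weight of the graph

def flow_energy(x):
    """sum_e flow(e)^2 / weight(e) when routing the whole unit flow
    through one 1-certificate, i.e. one singleton {j} with x_j = 1."""
    j = x.index(1)
    return 1.0 ** 2 / weights[frozenset({j})]

# Maximize over the single-1 inputs, the worst case for this graph.
m1 = max(flow_energy([0] * i + [1] + [0] * (n - 1 - i)) for i in range(n))

complexity = math.sqrt(m0 * m1)          # = sqrt(n), Grover-like
```

This is the simplest possible learning graph; the collision example discussed next needs the path-plus-branches structure because no single query is a certificate there.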
This was one of the constraints of the dual SDP, and it will be satisfied because, if you do the computation, you will see that this sum of inner products is exactly the flow that goes through a particular cut of the graph: one side of the cut corresponds to the subsets S for which x_S = y_S, and the other side to the subsets for which x_S is not equal to y_S. And because it is a cut, there is a flow of value 1 going through it, so this sum of inner products equals 1. This is really the crucial property that you use to prove that this is indeed a feasible solution to the dual SDP, implying that Q(f) is at most the square root of M0 M1, the value of this learning graph. Maybe very quickly, let me give you one example of how to apply this framework for designing a quantum algorithm, in case you want to solve the following problem: the decision version of the collision finding problem. Here you have an input x of size n; the input is defined over a large alphabet, not over binary values, but you can generalize the framework to such a case. And you want to distinguish between two cases: either it is a one-to-one input, meaning it is a permutation, each symbol appears only once; or it is a two-to-one input, meaning each symbol appears exactly twice in the input. So what learning graph can we use in that case? This is what we can do. We start with the empty set, then we progressively add the integers 1 up to n^(1/3), so we have a path of length roughly n^(1/3); and then we add each of the remaining elements, one by one, to form different subsets. This is the choice that we make. Now for the weights: on the initial path the weights will be pretty large, n^(2/3), and on the last part of the learning graph the weights
will all be equal to 1. So M0, the total sum over all the weights, is roughly n. Now we have to define a flow. The flow has to be defined for each input x for which the output is 1; we want to output 1 when the input is two-to-one, meaning that each value appears twice in the input. And we want to define a flow that gives us a small value. This is how we define the corresponding flow; there are two cases. Either we already saw some value twice within the initial path, in which case we can just take a flow that has value 1 along the initial path and then value 1 on any one of the last edges; let's just pick the top edge. This is indeed a valid flow because, in that case, a certificate is given by the positions of two indices holding equal values, and if we saw a collision among those first n^(1/3) values, then in particular we have a certificate contained in these subsets. Second case: what if we didn't see a collision among the values whose indices are between 1 and n^(1/3)? In that case we use a different flow, in which we have value 1 along the initial path and then value 1/n^(1/3) on each subsequent edge that leads to a collision. And we know that there exist n^(1/3) such edges, because each of the first n^(1/3) values will collide with some new index reached by one of those edges. If you compute the value given by this choice of flows, you will get that M1 is roughly 1/n^(1/3), and so the complexity of the corresponding quantum algorithm is of order n^(1/3). I mean, we already knew a quantum algorithm for solving collision finding based on Grover search, but this just illustrates how to apply the framework to the collision problem, and as I said, it can be applied to much more
complicated problems, using some modifications and generalizations of learning graphs. Okay, I'm going to conclude the presentation here. Thank you.
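As a supplement to the collision example, here is a back-of-the-envelope check of the exponents in M0, M1 and the resulting n^(1/3) bound; a rough sketch that ignores constant factors and only models the flow from the second case (no early collision), with names of my own choosing:

```python
import math

def collision_learning_graph_params(n):
    """Rough M0 and M1 for the collision learning graph sketched above:
    a path of length n^(1/3) with edge weight n^(2/3), then one weight-1
    edge per remaining index.  Constants are ignored; this is a sanity
    check of the exponents, not a rigorous analysis."""
    r = round(n ** (1 / 3))              # length of the initial path
    M0 = r * n ** (2 / 3) + (n - r)      # heavy path edges + light final edges
    # flow from the second case (no collision among the first r indices):
    # value 1 on each path edge, value 1/r on r collision edges of weight 1
    M1 = r / n ** (2 / 3) + r * (1 / r) ** 2
    return M0, M1, math.sqrt(M0 * M1)

M0, M1, q = collision_learning_graph_params(10 ** 6)
print(M0, M1, q)   # roughly 2n, 2/n^(1/3), and 2*n^(1/3)
```

For n = 10^6 this gives M0 on the order of 2·10^6, M1 on the order of 0.02, and a complexity around 200, matching the claimed n^(1/3) scaling up to constants.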