Oh, do you hear me in the back? Okay, today I'm going to present our semi-classical approximate optimization algorithm. This algorithm has two parts: the first part is the classical part, and then comes the quantumness. The classical part of our optimization algorithm is known as mean-field AOA, or mean-field approximate optimization algorithm, and it has been inspired by quantum annealing and QAOA. It is a polynomial-time algorithm and it delivers an optimum that is algebraically close to the true minimum: the error scales as a power of 1/n, where n is the number of spins, and it is applicable to any QUBO problem defined on a strongly connected graph. We also have a representation as an SU(2) spin coherent state path integral whose classical path corresponds to our classical algorithm. Later, as in semi-classics, we would like to add the quantumness, that is, the Gaussian quantum fluctuations around this classical path. These Gaussian quantum fluctuations are quantified as a spectrum of Lyapunov exponents, and they help us discriminate between easy instances and hard instances of a QUBO problem.

We worked in the context of the Sherrington-Kirkpatrick model. Its Hamiltonian is H_P, where the interaction terms g_ij are standard Gaussian random variables; they are collectively called the disorder of the model. There is a very celebrated result by Giorgio Parisi in the context of the Sherrington-Kirkpatrick model which states that, in the limit of an infinite number of spins, the average value of the ground state energy divided by the number of spins is a constant, equal to minus 0.763166. We use this constant, which we term the Parisi constant, to benchmark our classical algorithm.

So this is our classical algorithm. We have a time-dependent Hamiltonian with two parts: the first part corresponds to the driver and the second part corresponds to the problem. The time dependence is given by two discrete linear ramps, beta(t) and gamma(t), where beta(t) decreases in time and gamma(t) increases in time. The n_i's are the Bloch vectors corresponding to the individual spins. All the spins start in the state (1, 0, 0), that is, pointing in the north direction, and then they evolve in time according to the classical equations of motion for this angular momentum; we are simply considering spin up and spin down, that is, spin one-half. Applying some algebra to these classical equations of motion, we came up with an exact equation of motion for the individual spins, and we could arrive at an exact equation because we treat beta and gamma as discrete rather than continuous. Here V_D corresponds to the driver and V_P corresponds to the problem. If you look at this particular formula, you can see a loose similarity with the original QAOA, but there is a crucial point of difference: our algorithm is much simpler, because we are dealing only with multiplication of 3-by-3 rotation matrices from the group SO(3), which correspond to classical rotations. We calculate the mean field at every discrete instant of time for each spin, and as you can see from these formulas, only the matrix that corresponds to the problem depends on the mean field, not the matrix that corresponds to the driver.
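To make the structure of a single update concrete, here is a minimal sketch of one discrete time step in Python. It assumes the driver generates rotations about the x-axis, the problem term generates rotations about the z-axis, and the mean field of spin i is built from the z-components of the Bloch vectors through the couplings J; the axes, signs, and factors of two are assumptions here, not the speaker's exact conventions.

```python
import numpy as np

def rot_x(theta):
    """3x3 SO(3) rotation about the x-axis (assumed driver rotation V_D)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_z(theta):
    """3x3 SO(3) rotation about the z-axis (assumed problem rotation V_P)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c,  -s, 0.0],
                     [  s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def mean_field_step(n, J, beta_k, gamma_k):
    """One discrete step: rotate every Bloch vector n_i by
    V_D(beta_k) @ V_P(gamma_k * m_i); only the problem rotation depends
    on the mean field m_i, the driver rotation does not."""
    m = J @ n[:, 2]                          # mean field from the z-components
    n_new = np.empty_like(n)
    for i in range(len(n)):
        V = rot_x(2.0 * beta_k) @ rot_z(2.0 * gamma_k * m[i])
        n_new[i] = V @ n[i]
    return n_new
```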
Our polynomial-time algorithm has a time complexity of p times n squared, where p is the number of steps and n is the number of spins. Since we have a classical algorithm, we are under no obligation to keep p small; we take p to be 10,000, 20,000, whatever we want. Our main goal was to keep the algorithm polynomial in time, not to worry about how long it actually takes.

Then we come to the performance of our classical algorithm. Here we see that, with an increasing number of spins, the difference between the Parisi constant and the expected minimum value returned by our algorithm, divided by the number of spins, decreases algebraically. In this graph, we have plotted a histogram of the standard score of the true minimum of a problem, and we see that it follows a Gumbel distribution. On top of this histogram, we have plotted another histogram of the standard score of the minima that we obtain from our algorithm, and we see that this also follows a Gumbel distribution, albeit with decreased skewness due to errors. In the next graph, we have plotted the probability distribution of the minimum obtained from our algorithm. On the x-axis we plot the minimum value obtained from our algorithm minus the true minimum, divided by the average value of the true minimum, a quantity that we term epsilon star; on the y-axis we plot the corresponding probabilities. We see a sharp peak around x equals zero, because most of our runs return a value equal to the true minimum, but there is also a tail of the distribution, and we select a value epsilon in that tail. We say that a run of our algorithm is successful if the epsilon star corresponding to the minimum value returned by that run is less than epsilon; otherwise we say that the run has failed. We were interested in the effect of the value of epsilon on the success probability of our algorithm, and for that we analyzed the tail of the distribution and came to the conclusion that the probability that our algorithm returns an error greater than epsilon decreases exponentially with increasing epsilon, with a rate of the order of the square root of n, where n is the number of spins. This lets us conclude that the probability that our algorithm returns a value algebraically close to the true minimum equals one with exponential accuracy.

With that, this is the performance of our classical algorithm. Now that we have an exact result for our classical algorithm, we would next like to incorporate the Gaussian quantum fluctuations around this classical path and see what happens. In order to incorporate the Gaussian fluctuations around the classical path, we need to consider an SU(2) spin coherent state path integral whose saddle point trajectory is equivalent to our classical algorithm; the saddle point trajectory is the classical path. We consider a path integral whose action has two parts: the first part is the Berry phase of the spins, and the second part is the classical Hamiltonian that we considered for our classical algorithm, integrated over time.
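Going back to the classical algorithm for a moment, here is a minimal sketch of where the p·n² cost comes from, reusing the hypothetical mean_field_step from the sketch above. The linear ramp shapes, the rounding of the final Bloch vectors to spins via the sign of their z-components, and the 1/√N normalization of the couplings used for the Parisi benchmark are all assumptions, not the speaker's exact conventions.

```python
import numpy as np

def mean_field_aoa(J, p=10_000):
    """Run p discrete steps with a decreasing beta ramp and an increasing
    gamma ramp; each step costs O(n^2) for the mean field, so the total
    cost is O(p * n^2)."""
    n_spins = J.shape[0]
    n = np.tile([1.0, 0.0, 0.0], (n_spins, 1))   # all spins start in (1, 0, 0)
    for k in range(p):
        beta_k = 0.5 * (1.0 - (k + 1) / p)        # decreasing linear ramp (assumed shape)
        gamma_k = 0.5 * ((k + 1) / p)             # increasing linear ramp (assumed shape)
        n = mean_field_step(n, J, beta_k, gamma_k)
    sigma = np.sign(n[:, 2])                      # round Bloch vectors to classical spins
    energy = 0.5 * sigma @ J @ sigma              # problem energy, one common convention
    return sigma, energy

# Hypothetical benchmark against the Parisi constant: with couplings
# scaled by 1/sqrt(N), the true ground state energy per spin tends to
# about -0.763166 for large N.
rng = np.random.default_rng(0)
N = 200
J = rng.normal(size=(N, N)) / np.sqrt(N)
J = np.triu(J, 1)
J = J + J.T                                       # symmetric couplings, zero diagonal
sigma, E = mean_field_aoa(J, p=2_000)
print(E / N)
```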
Then we incorporate these Gaussian fluctuations, and these Gaussian fluctuations are quantified by a spectrum of Lyapunov exponents; in order to obtain these Lyapunov exponents, we essentially solve an effective scattering problem in time. From this spectrum of Lyapunov exponents, we can tell apart an easy instance from a hard instance of a problem. What happens in the case of an easy instance? For an easy instance, we see that throughout the annealing schedule, none of the values of the Lyapunov exponents comes close to or exceeds one. On the contrary, when we have a hard instance, there will be at least one peak in the largest Lyapunov exponent whose value is close to one or greater than one. Here in this picture, we show a particular case of a hard instance, and we see that there is a very large third peak in the largest Lyapunov exponent. We see three peaks, but only the third peak, which is extremely large and larger than one, is of concern to us. And we managed to find a relation between this particular divergence and the quantum critical point of the transition from the ergodic to the MBL phase.

Now, there is a... [audience interjection] Please continue, we'll save questions for the end. Okay. So there is a well-known quantity that characterizes the transition between the ergodic and MBL phases, and that is the level spacing statistics. When our system is in the ergodic phase, the level spacings are distributed according to the Wigner-Dyson distribution. On the contrary, when the system is in the MBL phase, that is, the many-body localized phase, the level spacings are distributed according to the Poisson distribution. Here we have plotted the Kullback-Leibler divergence between the level spacings that we obtain from our algorithm and the Wigner-Dyson and Poisson distributions, respectively (a small sketch of this comparison appears after the Q&A below). We see that these two curves cross at a certain point in time, and this point in time is the quantum critical point that marks the transition of our system from the ergodic to the MBL phase. We have observed that the location of this critical point, that is, the crossover between the blue curve and the orange curve here, is extremely close to the location of the third peak in our Lyapunov exponents. So these are our results from our semi-classical approximate optimization algorithm. With that, thank you for your attention, and this is the time for questions.

You have time for questions, so. Yes, thank you very much for the talk. I just wanted to ask: you gave this interpretation for the third peak, but do the other peaks also have some relation with a transition? Yeah, they do not have any relation with a transition, but the first peak, for example: we are starting from a state where all the levels are degenerate, say we are starting from a driver state, and then we slowly add the problem. When the first peak happens, all the levels reorient themselves, so it is a reorientation such that you no longer see separate bands in the energy spectrum, but rather one uniform band. As for the second peak, it corresponds to a polynomial gap in the level spacings, but that is not so important to us because we are only looking for the exponential gap, or mini-gap; these peaks do not correspond to any kind of phase transition. There is a question for you in the chat. Hello. Yeah.
Please go ahead, Komal. Where is the... Yeah, so the question we have is: at the point where there is a transition between Wigner-Dyson and Poisson statistics, how do you rule out that there is not an intermediate statistics, with linear level repulsion and an exponential fall-off? Did you hear the question? You can answer the question. I could not really get this. Komal, could you repeat your question one more time? Yes, the question is: the level spacing distribution makes a transition from Wigner-Dyson to Poisson statistics at the crossover. How does one rule out that it is actually neither, and is instead in the other universality class of intermediate statistics?

Well, we do not really believe in such a transition state, I would say. Some people do believe that there is an intermediate state, but in that case I do not really know how exactly you would classify it, because in order to compute this Kullback-Leibler divergence you need a reference distribution, and I am not aware of any such distribution for the level spacings in the intermediate transition state. Okay, well, the intermediate statistics have been well known for many years, and they involve a linear repulsion for small spacings and an exponential fall-off for large spacings. You can look up the literature, for example the works of Bogomolny; I will write the name in the chat. Thank you. Okay, thank you. Any more questions? Okay, then with that we will conclude, and thank you again.
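Finally, here is the minimal sketch of the level-spacing comparison referred to above: a Kullback-Leibler divergence between an empirical spacing distribution and the Wigner-Dyson (GOE surmise) and Poisson references. It assumes unfolded spacings with unit mean and a simple histogram estimate of the empirical density; the binning and cutoff are arbitrary choices, not the speaker's.

```python
import numpy as np

def kl_to_reference(spacings, ref_pdf, bins=50, smax=4.0):
    """Histogram-based estimate of D(empirical || reference) for unfolded
    level spacings normalized to unit mean spacing."""
    hist, edges = np.histogram(spacings, bins=bins, range=(0.0, smax), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ref = ref_pdf(centers)
    mask = (hist > 0) & (ref > 0)            # avoid log(0) in empty or unsupported bins
    width = edges[1] - edges[0]
    return float(np.sum(hist[mask] * np.log(hist[mask] / ref[mask])) * width)

def wigner_dyson(s):
    """Wigner surmise for the GOE (ergodic phase)."""
    return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

def poisson(s):
    """Poisson spacing distribution (MBL phase)."""
    return np.exp(-s)

# At each point of the annealing schedule one would evaluate
# kl_to_reference(spacings_t, wigner_dyson) and
# kl_to_reference(spacings_t, poisson); the time at which the two curves
# cross marks the ergodic-to-MBL crossover described in the talk.
```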