The next talk is on a diversity measure for discrete optimization: sampling rare solutions via algorithmic quantum annealing. So it's that. Thank you. Thank you.

So first of all, I have to thank the organizers for putting together this wonderful meeting and for giving me the opportunity to be here at ICTP, and I would also like to acknowledge my collaborators on this project, in particular Masoud Mohseni, with whom we spent many hours discussing these topics. The paper that I would like to present has two angles to it. One deals with the notion of diversity of solutions in the context of discrete optimization, and the second part offers some ideas about how you could potentially improve that diversity in the context of quantum annealing.

To give you some motivation, there are a few points. First, your problem Hamiltonian may in many cases be just an approximation of your true problem, which can otherwise be noisy or hard to express; in that case the exact ground state of the problem Hamiltonian may have little meaning for the true problem. There is a closely related issue with multi-objective optimization, where you may want a set of distinct solutions so that you can later impose some additional constraint and select the one that satisfies it. From another angle, diversity is a key hyperparameter in many heuristic algorithms, such as genetic or evolutionary algorithms. We could continue that list if we had more time.

Now, one could try to define diversity in many ways, but we think the definition should encapsulate two ideas. The first is that you want to focus only on solutions of sufficient quality. In some sense that is easy, because we just limit ourselves to solutions below some energy threshold, or solutions within a small approximation ratio. The second is that the solutions should be mutually independent. What I mean by that is the following: we need some notion of distance between solutions, and you can use the Hamming distance, or some refinement of the Hamming distance, which is what we use in our low-dimensional examples; here I will always normalize it by the problem size. We then say that two solutions are independent if they are sufficiently far away according to that metric. The intuition is that if you have a seed solution, then standard Monte Carlo can quite likely be extremely efficient in exploring its basin of attraction, but it may be extremely hard for Monte Carlo to jump to some other basin of attraction, that is, to move from one state to another using only local moves.

Combining those two ideas, we propose to define diversity as the size of the maximal set that satisfies both conditions: every element of the set has to be of high quality, and each element has to be independent, according to that metric, from all the others. In practice this translates to a max-clique problem. There are a number of caveats here. One is that this measure is not that easy to calculate; for one thing, the max-clique problem is itself NP-hard. So what we do in practice is use greedy algorithms to approximate it from above, and this works very well.
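To make that concrete, here is a minimal sketch of how such a diversity estimate could be computed. The function name, the plain normalized Hamming distance, and the simple greedy clique heuristic are my own illustrative choices; the actual bounding procedure used in the paper may differ.

```python
# Minimal sketch of the diversity measure described above: keep only solutions
# below an energy threshold, call two solutions independent if their normalized
# Hamming distance is at least d_min, and estimate the size of the largest
# mutually independent set (a max-clique instance) with a greedy heuristic.
import numpy as np

def diversity(solutions, energies, e_threshold, d_min):
    # keep only high-quality solutions
    good = [np.asarray(s) for s, e in zip(solutions, energies) if e <= e_threshold]
    n = len(good)
    if n == 0:
        return 0
    size = len(good[0])
    # adjacency: edge if the normalized Hamming distance is at least d_min
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum(good[i] != good[j]) / size
            adj[i, j] = adj[j, i] = d >= d_min
    # greedy clique: scan vertices by degree, keep those compatible with all chosen so far
    order = np.argsort(-adj.sum(axis=1))
    clique = []
    for v in order:
        if all(adj[v, u] for u in clique):
            clique.append(v)
    return len(clique)
```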
A more general, fundamental concern is that everything I am talking about here is defined only up to the best knowledge of the solution space that we have; that is true even when we talk about the approximation ratio, since you may not know the exact density of low-energy states. But if you have that framework, then you can do many things. For instance, you can define a diversity ratio for a given solver: out of all the diverse basins of attraction known from the best available knowledge, or from the pool of all the solvers that you have, how many can that solver sample? Or you can use the framework to compare two samplers, or two solvers, against each other. Something like that was actually done in a paper parallel to ours by the D-Wave team, where they showed that according to this metric the quantum annealer had quite a large advantage in time to diversity against PT+ICM, at least for two of the classes of problems they studied, while it was on a par for the first class.

All of this is, I think, very nice, but let me also show you an example, which is also one of the classes of problems that I will be studying here. In this work we consider a 2D lattice with nearest-neighbor random interactions and some small local fields. What I am showing you here is the ground-state configuration. But apart from the ground state, there are also some distant low-energy states within an approximation ratio of 10^-3, and to jump from the ground state to another low-energy state you have to flip all the spins in such a droplet. According to the definition from the previous slide, we have six such states, each at least the threshold distance from one another, so the diversity is six. What you see here is, effectively, a high-dimensional low-energy subspace.

We can compare that measure with the standard measure of dissimilarity of solutions in spin glasses, that is, the distribution of overlaps, the Parisi order parameter. We sample from exactly the same instance, and the signal is pretty much completely washed out. What is happening is that the distribution of overlaps gives you a one-dimensional projection of an effectively high-dimensional structure, and because of that it is not able to pick up any signal here.

For reference: the problems we study are selected in such a way that we have a very good idea of what the low-energy subspace looks like. For that we use a branch-and-bound technique combined with tensor-network contraction of the partition function of such a 2D lattice. That routine has its limitations: to execute it, the system basically needs to be two-dimensional or quasi-two-dimensional. But the nice thing is that when we run the branch and bound we can use the locality to additionally build the full hierarchical structure of excitations on top of the ground state. So in one run of the algorithm we are getting pretty much all the information we need to extract, and it works extremely well for the system sizes that we have here.

Okay, so with that I can jump to the second part of my talk. The idea here is inhomogeneous driving fields. We have our driver here, and notice that I put the driving term under the sum, so the driving term is not only time dependent but also position dependent. What we are aiming at is, in a sense, to grow a bubble of the new phase into the old phase.
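As a rough sketch of what such a driver looks like (the notation here is mine, not necessarily the paper's), the transverse field carries a site index as well as the time:

```latex
H(t) \;=\; -\sum_{i} \Gamma_i(t)\,\sigma^x_i
\;+\; \sum_{\langle i,j \rangle} J_{ij}\,\sigma^z_i \sigma^z_j
\;+\; \sum_i h_i\,\sigma^z_i
```

with Gamma_i(t) switched off earlier for sites the front has already passed, so that the ordered phase of the problem Hamiltonian grows as a bubble into the region still dominated by the driver.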
So you have a front that expands in time as we effectively switch off the transverse fields, and the control parameter is the slope of the driving. There are two ideas here. One is that if the slope is zero, then this is your standard homogeneous quench, and the value of the slope allows you to control the potential size of quantum fluctuations: if everything is homogeneous, then in principle you can have quantum fluctuations over the full system, while if the slope is steeper, you limit them in space. The other idea is that part of the system crosses the transition before another part, and if the velocity of the front is slow compared to the velocity at which information can spread in the system, then the spins that have already crossed the transition can bias the spins that are just crossing it and allow them to adjust. So we have two limits. One limit, alpha equal to zero or approaching zero, is the homogeneous transition; the other limit is an extremely steep front, where you basically go one spin at a time. Neither of those limits is particularly good, but quite often it turns out that there is some intermediate value of the control parameter that gives you a boost.

Because this showed up yesterday, let me very briefly mention that if you use such a protocol on the uniform transverse-field Ising chain, it gives you a pretty robust shortcut: a square-root speedup in the adiabatic time. For more complicated systems there is some evidence that in some cases it can allow you to avoid Griffiths singularities, and then you have an exponential gain. There are also some works from the group of Professor Nishimori on first-order transitions where, again, in some cases there is an exponential gain. And it is a protocol that you can run on D-Wave annealers; there, again, order-of-magnitude improvements were presented for some problems in that paper.

But let me show you here the example that is the second class of problems that I have. It is something like a quasi-one-dimensional chain, but with random interactions and such that each spin has six neighbors, so it can be frustrated. What I am showing you here is the homogeneous protocol as the baseline: that is the standard quench, the blue line, and you see that it flattens on the log-log scale, so we are hitting an exponential barrier; the quantity is the median of the residual energy. The red line is a relatively steep front, and it is not good either. But you see that for some intermediate value of the slope you get a noticeable improvement in the residual energy. So that is nice, and this is an exact simulation of the Schrödinger equation for such systems, which we can do in this quasi-one-dimensional setting.

There is one more point that I would like to make here. We also plot the optimal slope for each instance. That is kind of cheating, but what it shows is that the optimal alpha may be instance dependent. So in all of the following, we will always have a portfolio of slopes and average over the portfolio, while, as is standard when we calculate time-to-whatever, we optimize over the annealing time. That was a basic test, but it gives us some hope that we can move with those ideas to frustrated systems.
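Before moving on, here is a minimal sketch of a single-front schedule of the kind just described, under my own assumptions: the transverse field ramps linearly in space with slope alpha across a window that sweeps the chain, and the front velocity is fixed by the total annealing time. The actual parametrization used in the work may differ.

```python
import numpy as np

def front_field(x, t, alpha, t_total, length, g_max=1.0):
    """Transverse field at position x and time t for a single front of slope alpha.

    The field ramps linearly from g_max down to 0 over a spatial window of width
    1/alpha that sweeps the system; the velocity is chosen so the last site
    switches off at t_total.  Assumes alpha > 0: a small alpha makes the window
    much wider than the system, approaching the homogeneous quench, while a very
    steep alpha switches the sites off essentially one at a time.
    """
    v = (length + 1.0 / alpha) / t_total          # front velocity
    return g_max * np.clip(alpha * (x - v * t) + 1.0, 0.0, 1.0)

# example: field profile along a 64-site chain, halfway through the anneal
x = np.arange(64.0)
profile = front_field(x, t=50.0, alpha=0.25, t_total=100.0, length=64.0)
```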
But the actual thing that I want to present is based on the work we did with Masoud Mohseni two years ago, which introduced the idea that you don't have a single front but many fronts. That is meant to avoid a shortcoming of a single front, namely that it does nothing with a big part of the system for most of the time. There is a penalty to pay here: if you have several fronts and they collide, then quite likely you will get a localized defect. But if you can somehow place the borders so that such a defect is not costly to you, then this may be a good schedule.

So we are now trying to combine all of those ideas. We run experiments where we use our examples, for which we know the rough positions of the borders of the droplets. That is the only input I take; I take no information about the spin configuration inside. We just divide, as shown schematically here, the two-dimensional system into clusters that roughly coincide with the borders of those droplets. Doing so, we have a portfolio of both possible borders and slopes, and with that we can run our schedule.

Here is the quasi-1D setup. The dashed line is the reference homogeneous quench, the standard protocol; the solid line is the protocol that we are playing with, and we have increasing system sizes. You see that there are very strong finite-size effects: for 128 spins you don't see much difference, but when we go to 512 you can see a noticeable gain. What I am plotting here is the median time to diversity, and the x-axis is the diversity ratio: how many of the diverse solutions from our baseline the solver was able to extract. We have two limits. One limit is when the diversity ratio is one; then the solver was able to find all the diverse solutions. The other limit is a diversity ratio of zero, which just corresponds to the standard time to solution. So you have a parameter that, in a continuous way, interpolates between the standard time-to-solution measure and extracting pretty much everything that we know about the low-energy spectrum.

We can play the same game in two dimensions. Here is a path-integral Monte Carlo simulation with two system sizes, 30 by 30 and 40 by 40; the dashed line is again the baseline. What we can see is that in this example we don't get much gain at small diversity ratios, and small means that, let's say, time to solution here does not give us much in this kind of simulation. But we do get some noticeable gains when the diversity ratio is larger, and that basically means that our portfolio was able to unfreeze some solutions that are not accessible in reasonable time by the standard protocol.

As pretty much my second-to-last slide, it is instructive to look not only at the median over all the instances that we had, but also at a scatter plot. We have different system sizes; this panel is time to solution, this one is a diversity ratio of 0.8, so pretty high, and this one is 0.5. I would like to point out the gray area: gray means a time-out. We have quite a lot of situations, in half of the instances, where neither of the solvers was able to pick up the signal within our simulation time scale.
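As an aside on how such a multi-front schedule could be assembled in practice, here is a minimal sketch following the same parametrization as the single-front snippet above. The cluster labels and seed points are assumed to come from the preprocessing step that identifies the rough droplet borders; the details may differ from what was actually run.

```python
import numpy as np

def multi_front_field(positions, labels, seeds, t, alpha, t_total, g_max=1.0):
    """Transverse field for a multi-front schedule on a 2D lattice.

    positions : (N, 2) array of site coordinates
    labels    : (N,) array assigning each site to a cluster (from preprocessing,
                roughly following the droplet borders)
    seeds     : dict {cluster_label: (2,) seed coordinate}
    Within each cluster the field is switched off along a front of slope alpha
    (alpha > 0) that expands radially from the cluster seed; the per-cluster
    velocity is set so that the most distant site switches off at t_total.
    Fronts from neighboring clusters meet at the borders, which is where any
    localized defects end up.
    """
    g = np.empty(len(positions))
    for c, seed in seeds.items():
        idx = np.where(labels == c)[0]
        r = np.linalg.norm(positions[idx] - np.asarray(seed), axis=1)
        v = (r.max() + 1.0 / alpha) / t_total      # per-cluster front velocity
        g[idx] = g_max * np.clip(alpha * (r - v * t) + 1.0, 0.0, 1.0)
    return g
```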
Those instances in the gray area turned out to be quite hard for Monte Carlo, or path-integral Monte Carlo; there, life is hard. But the interesting part is here: the region where the portfolio is winning. We have quite a sizeable portion of the instances where the homogeneous protocol is timing out while the portfolio suddenly picks up a signal, finding the solution sometimes on a much shorter time scale. That is pretty much where we see the biggest gain from this approach: it seems that it is sometimes possible to get to low-energy solutions that would otherwise be intractable for the standard protocol.

With that, I can basically finish. I hope I was able to convince you that diversity is something one should care about in solvers or samplers, and that when we talk about diversity we need some kind of reasonable metric to be able to quantify solvers against it, and we propose one such metric. The other point is that there may be some opportunities in combining something like a hybrid solver, where you get instance-dependent inputs, you feed them into your annealer, and thanks to that, perhaps not in every case, you can improve the quality of the results that you get. And with that, I would like to thank you.

Questions?

Would you care to comment on generalization to higher dimensions, or to more fractal or problem-specific geometries, and whether these concepts would likely be enhanced or reduced there?

Well, what we have been doing here is a kind of baseline, because we have been able to benchmark it. There was a talk by Masoud on Monday that elaborated on one of the possibilities for finding those borders. It is an open game; you can try to see if that works. I can also think of another example that I haven't tested, but why not: you could run your sampler homogeneously, get some solutions, try to estimate where the borders are, and then try some different protocols. So something along those lines, but that is, let's say, left for the future. I hope I piqued your interest with this talk, and maybe it will go further.

Regarding your last set of results, just a few clarification questions. First of all, was that done with quantum Monte Carlo? Is that what you are showing there?

Yes, the 2D case is quantum Monte Carlo, and that was done by Sergey Sakov; pretty much, I sent him the borders for the simulation, I got the data from him, and I made the plots. The 1D case is MPS. So that answers that question.

OK, so I am not sure I understand exactly what all the different ingredients are that you are throwing into this to get these results. You are talking about the inhomogeneous driving, multiple fronts, and so on. So what exactly is the list of ingredients that go into the improvement here?

At least at the intuitive level: if you have some idea where the low-lying droplets may be, then you can limit your problem to solving it only on a cluster. It combines the idea that a single front, which may be good inside the cluster, can do something better than the homogeneous quench, with the idea that if you are able to get that insight, from whatever source, then the excitations you do create are localized at places where they are not costly.
But then the different fronts are able to pick out many different solutions, because you are creating that possibility.

Sorry, I think I didn't ask that clearly. What is the list of things that you do in order to solve this problem differently from just running standard quantum Monte Carlo?

OK, so for this, we have our preprocessing, and that's it: we get, let's say, some kind of borders telling us where to draw the clusters. But then all we do is run the annealing.

OK, so you need to optimize these g_i(t) functions somehow, right?

To get these results, what we have been doing is to optimize over the total annealing time, that's one thing, and the other thing is that we have a portfolio, so we are averaging over the portfolio. In that sense, apart from the preprocessing, which is not included in the time, the comparison is fair. You have a portfolio; some runs in the portfolio give you one solution, some give you a good solution elsewhere, but on average, from that portfolio, you are getting more than from just repeating the same simple procedure. To go back to Einstein: if you repeat the same procedure, you shouldn't expect to get different outcomes. We are, in a way, expanding the span of protocols that you run.

But how do you find your g_i(t) functions? How computationally heavy is that? And does that go into the time to diversity?

Given the borders, we have one parameter, which is the slope, and that was pretty much a random selection from some set. Then you just grow the front out: you have your total annealing time, and that gives you the velocity. The one constraint is the total annealing time, and you just grow it from the center outwards, and that's it.

Okay, thanks. Other questions? Let's thank the speaker. Okay.