This afternoon session — I know lunch was a little short and people are still arriving, but in order not to delay the whole program: it is a pleasure to have Ludwig Mathey with us. Please go ahead.

Thank you so much. I would like to thank the organizers for setting up this beautiful meeting in this beautiful location — it's my first time here, and I'm blown away by how pretty it is. I would like to share with you some of our recent thoughts centered around optimal control with regard to implementing quantum algorithms. Here is a brief overview. As a starting point I'll take our study centered around mitigating barren plateaus that occur in this context, and then go on to the algorithm we are proposing, which is pulse engineering via projection of response functions — PEPR, as we call it — which I'll cover in a bit more detail. Then there is our experimental collaboration with the groups of Christoph Becker and Klaus Sengstock and their teams, where we utilize these ideas for implementing quantum gates in Rydberg tweezer systems. I would also like to share some thoughts about our method called ORCA, for exclusive-or quantum algebra. And time permitting — which I probably won't have — I might say a few words about quantum simulation of dynamical RG in the context of the dynamical BKT transition; this is done together with the team of Chris Foot, and it is really a joy to work with them.

All right, let's get started — if this works, if it lets me. Okay. So let's start with this as a starting point; this is done together with Lukas Broers, who deserves a lot of credit for this nice work. I should also say: some of the slides I've seen here so far are absolutely gorgeous, and some of mine are still in a phase of development — not quite as pretty as what I've seen this morning. That's partially because the preprints on PEPR and ORCA came out just today and yesterday, so I'm still polishing them, but I was too excited not to share this with you. That's why some things might look a little bit like they are still growing, let's say.

Okay, so this is what one might think of when one sees a quantum circuit in the paradigm of discrete, gate-based quantum computing: you set up your quantum algorithm composed of individual quantum gates — all the usual classics, Hadamards, X gates, CNOT gates, and so on. This is a conceptually important way of depicting things. It is conceptually transparent, because you break down a complicated unitary into these components, and it is also a setting where you can more easily make statements about scaling: if you imagine writing down a sequence of quantum circuits with an increasing number of qubits, you can define a well-posed sequence and analyze the asymptotic behavior, the complexity, of the algorithm.

But if you take this picture too literally, you might end up with some challenges — for example idle time — and it might not be the experimentally ideal realization. This slide is supposed to illustrate that. Take the small two-qubit circuit shown here, and imagine you have a Hamiltonian composed of parts that you can control, with some parameters your experimental friend can implement in the lab. If you take the circuit literally, you might end up with a sequential realization like the one shown here: a π/2 pulse, then some interaction pulse, then another π/2 pulse, and so on. But this really doesn't make full use of the platform you might be working with. You are in a larger space, and you might be able to implement something like this naturally: you can simply run all of your control protocols, all of your control operators, in parallel at the same time. Now you can ask the question: is there a better, more efficient implementation of the desired transformation in this larger space? That is the large multi-parameter optimization problem that occurs in this context.

So, like many others, we use an optimization algorithm, which I only flash here; we'll go into more detail later. To expand a little on the idea: you have your qubits, you initialize them, typically in a randomized fashion, you propagate them in time, and you read them out. You have some type of loss function — for example the fidelity compared to the desired state the system should have after the time propagation — and you take some gradient-descent-type approach. This all leads to GRAPE and GRAPE-like algorithms and related methodologies.

Okay, I'm being a little bit vague about exactly what we did here, because I want to draw attention to a specific point. Recall that our Hamiltonian might be of the following form: you have a number of time-dependent functions, these θ_j(t). They could be Rabi-like pulses.
They could be magnetic fields; they could be all sorts of free parameters that you might have on your platform. Now you want to make a finite, usable set of parameters out of this, in order to constrain this infinite-dimensional function space. One typical setup is a stepwise parameterization: you break up the time interval you give yourself into step functions, and the prefactor of each step function becomes one of your parameters. That is how you might represent this functional space with a finite number of parameters, and it also has something to do with variational quantum algorithms, where you have a somewhat sequential realization of operations in time. What we tried out, and what we were intrigued by, is: what if you go to Fourier space? Then you have a temporally non-local parameterization of these θ functions — they all overlap, as shown here — and you are really looking at this larger space, moving away from the stepwise parameterization toward these temporally non-local representations.

With this in mind, we gave ourselves a computational task — I think by now I've seen variations of it in a number of papers. You write down a quantum Ising model, as shown here. It has a transverse magnetic field, in the form of a B_x and a B_y term — so two components — and it has the Ising coupling between neighboring qubits. These control functions together make up the space of θ's; they contain your trainable parameters. Then you give yourself the task of finding the minimal energy, the ground-state energy, of some random Hamiltonian. You pick your Hamiltonian, and you want to minimize this energy by propagating your state over a time interval of one, thinking of the final state as your estimate of the ground state of your system. That is the computational task we gave ourselves. And here, without trying to map out too much about scaling and so on, let's just see how these two ways of parameterizing your θ functions compare. Here, for example, we take four qubits, and in each case we show three optimizations as a function of the learning iterations, as shown here. This is for four qubits; this is for six qubits.
This is the energy difference compared to the exact ground-state energy, and the first impression — more of a practitioner's impression — is that in the stepwise optimization you keep getting stuck in these local minima, whereas in the temporally non-local Fourier representation you see improved behavior, simply on this practical level. So, without mapping this out quantitatively, you find improved convergence by using a Fourier parameterization of the control functions. You can also scale this with the qubit number.

I should mention that when you look at this space of θ, you have to constrain something. If you allow yourself a finite time interval but let the energy scales of your control functions have infinite amplitude, then — as long as your control set is universal — you can realize any unitary arbitrarily closely. So you need, in some sense, to constrain the maximum of the allowed space of functions that you are including. This is also an experimentally relevant constraint, because you might simply have a maximum Rabi frequency that you can implement, a maximum magnetic field, and so on. This leads to this θ_max, which is a global maximum for all the control functions combined. The punch line is: if you set this constraint too small, the system doesn't really find good improvement. And here we are now looking for barren plateaus: we look at the variance of the gradient of the energy with respect to one of the control parameters. You can define this in different ways — you could define a global variance if you like — but here we focus on this one quantity. If you give yourself too small a global constraint on the control parameters, you don't see good convergence, so you want to be around here, where you allow yourself enough control of the system. What people refer to as barren plateaus is that, as you scale up the number of qubits from 2 to 8, you see these equally spaced magnitudes, indicating an exponential suppression of the gradient of your loss function — of your energy, your infidelity, whatever your loss is. And this seems to be reduced, in our opinion.

(Question from the audience: you mean the number of...?) Okay, so the steps to imagine are: this was one of the magnetic fields. If we go to this parameterization, you pick out one of these parameters, and you look at the derivative of the total energy of the final state with respect to this parameter — that is what this quantity means. You then take an ensemble average over the initializations; equally, you can take an ensemble average over your update steps, your optimization steps. That is the variance — exactly, that is the distribution we're looking at.

And this behavior seems to be reduced for the Fourier representation: you still find a reduction, but it appears sub-exponential. Given the modest number of qubits, we are motivated, and working on, pushing this to larger numbers; so at this point in time we consider it an indication — a candidate — for barren plateau mitigation, in the sense that up to this qubit number the reduction of the variance is sub-exponential, a slower emergence of your barren plateaus. You can imagine why this is a problem: your loss landscape becomes very flat, and the system doesn't know how to optimize anymore, because the gradient is so strongly suppressed — especially for growing qubit numbers, where you would hope to get at least some type of hint from the system about how to implement your algorithm. Okay, this is where I want to leave the barren plateaus: we tried out this alternative way of parameterizing the control functions, stepwise versus Fourier, and there seems to be a tendency to mitigate the emergence of barren plateaus.

Okay, so this is the playground we wanted to expand on. Let's remind ourselves a little of the properties of optimal control in this context, with the purpose of achieving a high fidelity of a given quantum transformation. So basically this is a typical setup here, right?
You have your Hamiltonian; it contains perhaps a part that is not controllable, and then it has these control terms, composed of the control functions and the control operators — these operators B_j, of which you have a certain number, given the setup you're considering. Each control function we break up into control parameters — prefactors — and mode functions, such as, in the previous case, the stepwise parameterization, where the mode functions are those steps. The Fourier modes, the sine functions, would be an alternative, but it could really be any type of expansion.

Then a typical optimization task is: you choose some initial state at time zero, and you do two things with it. You propagate it in time with your Hamiltonian — that is one thing, giving you ρ(t_f) — and you generate the target state by applying the target operation to your initial state. Then you take the fidelity between the two: you see how well you're doing, how close you end up to the desired state. And you optimize this iteratively. This is the entire playground where GRAPE and GRAPE-like methods come in. A typical setup is that you now modify your θ's, your trainable parameters — the loss function in our case is just the fidelity. You shift a parameter a little, run your time evolution again, and take this difference quotient as an estimate of the gradient with respect to that parameter. Then you do the GRAPE update by shifting your trainable parameters by this gradient times some learning rate α — α_GRAPE in this case.
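To make this difference-quotient loop concrete, here is a minimal sketch in Python. This is not the talk's implementation — it is a toy single-qubit setup with a piecewise-constant σ_x drive, and the step count, ε, and learning rate are illustrative choices of mine:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def evolve(thetas, dt=0.5):
    """Piecewise-constant sigma_x drive: psi -> prod_k exp(-i theta_k sx dt) psi."""
    psi = np.array([1, 0], dtype=complex)          # start in |0>
    for th in thetas:
        U = np.cos(th * dt) * I2 - 1j * np.sin(th * dt) * SX
        psi = U @ psi
    return psi

def infidelity(thetas, target):
    """Loss: 1 - |<target|psi(T)>|^2."""
    return 1.0 - abs(np.vdot(target, evolve(thetas))) ** 2

def grape_step(thetas, target, eps=1e-4, alpha=0.5):
    """One GRAPE-style update: finite-difference gradient, then a descent step.
    eps and alpha are exactly the two hyperparameters discussed in the talk."""
    grad = np.zeros_like(thetas)
    for j in range(len(thetas)):
        shifted = thetas.copy()
        shifted[j] += eps
        grad[j] = (infidelity(shifted, target) - infidelity(thetas, target)) / eps
    return thetas - alpha * grad

target = np.array([0, 1], dtype=complex)           # |1>: image of |0> under an X gate
thetas = np.array([0.3, 0.4])
for _ in range(200):
    thetas = grape_step(thetas, target)
print(round(infidelity(thetas, target), 4))        # converges to 0.0
```

Here the optimum is any pulse sequence whose total rotation angle about x is π, i.e. the summed θ's approach π.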
So already on this slide you see two hyperparameters: the magnitude of ε, which we'll come back to in a moment, and this learning rate α. Okay — of course there are a number of flavors of these algorithms that differ in the details and the implementation, but this is, I think, a fairly representative key step of the algorithm.

Now I want to bring into the discussion the notion of response functions — an all-time classic in physics. You imagine you have some system — it can be anything: atoms, solids, whatever you have — and you apply a probing term to it of this form: a generalized force f(t) that couples to some operator B of your system. Then you pick an observable A you are interested in, and you ask: how much does this observable change due to the probing term? You pick the B, you pick the A, and in principle you have the full time dependence f(t). The statement of linear response theory is that the change is the convolution of this external force with the response function, the linear susceptibility χ_AB(t, t'), which is essentially the commutator of the two operators A and B in the interaction picture. This is the classic story you all know — conductivity, polarizability, magnetic susceptibility; physics is full of this. It's a very useful concept.

Okay, let's take this concept and apply it to the optimization task we just looked at. The way to think about it is: we have these control operators B_j over here, we take one of them, and we apply this type of probe term to it — a strength of minus ε times a delta function at a random time t_r. This is what we probe the system with.
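For reference, the linear-response statement invoked here is the standard Kubo formula; in the usual textbook convention where the probe enters the Hamiltonian as $H'(t) = -f(t)\,B$ it reads

```latex
\delta\langle A(t)\rangle
  = \int_{-\infty}^{t}\!\mathrm{d}t'\,\chi_{AB}(t,t')\,f(t'),
\qquad
\chi_{AB}(t,t')
  = \frac{i}{\hbar}\,\theta(t-t')\,
    \big\langle\,[\,A_I(t),\,B_I(t')\,]\,\big\rangle ,
```

where $A_I$, $B_I$ are interaction-picture operators and $\theta$ is the Heaviside step function.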
This is now what we probe the system with So you have two random choices here that you make in the algorithm that we're using there You pick randomly one of these NB Operators and you pick a random time on your time interval here In other words our generalized force here's of the form epsilon times delta of t minus t r So then sort of in our formula We have this you propagate now this guy sort of up to the time t r and the observable that we're looking at Is the following it's really the roll star It's kind of the you take the initial state and you apply to it the target Operator the time for example see not gate or what have you sort of that's our roll star This is now place the role of the observable which we then also have to propagate into the interaction picture So here's like a bit of a longer formula. I wish I could say I'm sorry But I really just want to discuss it with you guys All right, so I'm like, but it'll be over soon. I say I don't okay So um, so if I take now this formula that I showed you there Let's just throw everything in there that we just set up We have here sort of some pre-factor the theta function doesn't matter anymore because like t r is always smaller than tf and what we have here is here's our initial state because that's what we need to trace over we propagate our probing operator forward in time to the random time Then we propagate our observable to this time sort of and you can take this entire structure use the magic of Taking traces and shifting around unitary operators and you can formulate the whole thing slightly differently Which looks like so you take the initial state Propagate it to the random time take the commutator Propagate it all the way to the rest project it on the target state this object here Kai J is equal to the Graded it's an exact expression for the f over epsilon like so you can calculate this object in a single run You don't need to have two runs, but have just a single run and it's also a hyper parameter 
free Representation of this gradient. So these hyper parameters in in a practical setting always I always feel like they're kind of a pain because in principle you have to scan over them and look for the optimal hyper parameter in order sort of Basically, that's another optimization test You're saving this here by having the exact gradient the exact gradient also implies that you're always walking in your gradient ascent type method you're always walking up the The mountain in the right direction and not within not within approximated gradient, right? So that's actually a limiting factor of doing grape the way I described it to you earlier Okay, so we have now this gradient like we we probe of him we probe our system We perturb our system with this operator BJ at the time TR. What do we do with this? You take this object here and we expand now this delta function in the mode functions of your Control operator. These are the mode functions f j k of t and these are now the projection coefficients of those mode functions onto the delta function And this then taken together leads to this update step That we propose to use as I'm you take these sitar's and you take the Response function at this time TR times the projection of these mode functions times on learning rate alpha zero Okay, so this can be then sort of summarized into our pepper update which looks like it is shown here sort of I'm just summarizing this here What I so described to you before here's our time interval We are and you proceed as follows you pick your random initial state you propagate it forward to the randomly chosen time PR Take the commutator and propagate it all the way to the final time Projected on the target state take the fidelity of that This gives you chi of j and then multiply your chi of j times the projection of the mode functions onto this onto this Onto the delta function here and this is how you update the corresponding parameters, okay? 
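As a rough illustration of this update — not the authors' implementation, but a single-qubit toy of my own with two controls (σ_x and σ_y), sine modes, and my own sign conventions for the delta kick — one could sketch:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
OPS = [SX, SY]                          # control operators B_j (toy choice)
T, NT, NK = 1.0, 100, 4                 # interval, time grid, number of sine modes
dt = T / NT
ts = np.linspace(0, T, NT, endpoint=False) + dt / 2

def modes(t):
    """Sine mode functions f_k(t) = sin(pi k t / T), k = 1..NK."""
    return np.sin(np.pi * np.arange(1, NK + 1) * t / T)

def step(H):
    """exp(-i H dt) for a 2x2 Hermitian H."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def propagate(psi, thetas, t0, t1):
    """Propagate psi through all grid times in [t0, t1)."""
    for t in ts:
        if t0 <= t < t1:
            H = sum((modes(t) @ thetas[j]) * OPS[j] for j in range(2))
            psi = step(H) @ psi
    return psi

rng = np.random.default_rng(0)

def pepr_update(thetas, psi0, target, alpha=0.2):
    j = rng.integers(2)                 # random control operator
    tr = rng.uniform(0, T)              # random kick time
    psi_r = propagate(psi0, thetas, 0.0, tr)
    psiT = propagate(psi_r, thetas, tr, T)
    kicked = propagate(OPS[j] @ psi_r, thetas, tr, T)
    # response of F = |<target|psi(T)>|^2 to a delta kick with B_j at tr,
    # computed in a single extra propagation (no finite difference, no eps)
    chi = 2 * np.imag(np.conj(np.vdot(target, psiT)) * np.vdot(target, kicked))
    thetas[j] += alpha * chi * modes(tr)   # ascent step, kick projected on modes
    return thetas

thetas = rng.normal(0, 0.3, size=(2, NK))
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)   # image of |0> under a target X gate
for _ in range(600):
    thetas = pepr_update(thetas, psi0, target)
```

The point of the sketch is the structure of the update: one random operator, one random time, one extra propagation for the response, and a projection of the delta kick onto the sine modes.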
So that's the algorithm we proposed, and we tried it out in this two-qubit example. We wrote down two qubits with an H_x σ_x and an H_y σ_y term — some type of Rabi control for each of the qubits individually — and some type of Heisenberg coupling between the two qubits. We chose as the target operation the CNOT gate, and here we again work with our sine functions: all of these control functions — the H_x and H_y for each qubit, and the J — are written in this sine-function expansion with a finite number of modes. The mode functions take this form, and the projections onto the delta function take this form, which results in this type of update: θ_jk is updated by this quantity once we have determined the response function for a given time step.

So this is how it compares. Here are our PEPR trajectories, and these are the GRAPE trajectories; this is the number of runs, and this is the mean log of the infidelity of our realization — perfect fidelity would be minus infinity. I should also say that our PEPR method has this fast convergence, and the floor here is determined by the numerical accuracy of your ODE solver: you can shift it around by picking a lower solver accuracy, and that determines this floor one to one — indicating that nothing else is going wrong other than the inaccuracy of the ODE solver. For GRAPE you see this shift. We tried to challenge our method by doing everything in favor of GRAPE; we also used the multi-parameter update shown here. What we show is the mean log of these trajectories — it's more of a guide to the eye than a rigorous quantity — which actually favors low infidelities; if you do it the other way around and take the mean infidelity, then this blue line, for example, would be all the way up here, because a large number of trajectories get stuck in this bad minimum. The overall impression we're getting is that essentially all blue curves lie above all orange curves: in a given setup, if you set up both methods and run one and then the other, an individual trajectory will essentially always perform better with the PEPR update.

Okay, just for the fun of it: this is what you find for these optimal protocols for PEPR. You can visualize the motion of the qubits on the Bloch sphere, and it does exactly the desired CNOT — a nice illustration of: so, guys, what did you actually find?

Okay, so these are our conclusions on this method. The ingredients of our algorithm: you choose a random time and a random control operator; you determine the response of the fidelity to a perturbation of that control operator; and then you update your θ's — your trainable parameters — by taking this response and projecting it onto the mode functions. That is the update step. We find both improved convergence — faster convergence — and a final implementation floor that lies below what you find in a typical GRAPE realization. Okay, and this appeared on the arXiv this morning, so I'm assuming most of you have read it — but in case you haven't, you're very welcome to.
And if any of you referees this, I'll buy you a beer — I'd be so grateful. Okay, so this next part I'm just going to click through roughly. We are working here with Klaus and Christoph and their teams on implementing quantum gates in Rydberg systems. You have the beautiful playground of these highly excited states of atoms with very high principal quantum numbers; they have this massively enhanced interaction, scaling like n to the eleventh power, which is an unreal enhancement compared to what you see in other systems. You put the atoms into a tweezer array and look for gate realizations in this context. There are two notable limits that might be worth your consideration. If you take a fairly large distance, fairly slow driving, and a fairly low Rydberg state, then your gate realization is based on accumulating a dynamical phase; whereas if you take a close distance between the tweezers and a large quantum number, you end up with a gate realization in the so-called blockade regime. In our optimization we find that these regimes are very relevant for the behavior of the gates. Okay, so we made a model: here are our 3P0 states.
We take two of these as our logical qubit. We have our Rydberg state, which belongs to the 3S1 manifold; we have a two-photon Raman process between the logical qubit states; and we have a Rabi pulse between the logical one state and the Rydberg state. You take some of the dissipation into account, and all of this is done for an ytterbium setup. Okay — I could have saved myself some time by just saying that. This here is more of an illustration; I'm not going to go into too much detail for now. You can then optimize in the space of these individual Raman pulses and this global Rabi pulse, with realistic conditions that we learned from our friends, and you find that you can realize a nice CNOT gate — here you see the learning process, and this π here is again a global constraint that the system needs to meet in order to end up in a good regime for your CNOT gate.

Okay, and maybe one last remark on this: you can also check the robustness of this implementation. This is a typical question you might get asked: what about spatial fluctuations — not placing the tweezers at exactly the intended location — and, furthermore, motional vibrations of your atoms in the tweezers? You can check the robustness of the prediction, and what survives is that both the blockade regime and this weak-coupling dynamical-phase regime have good robustness. But there is an intermediate regime, where all the energy scales — the Rabi pulses and so on — are on the same scale as the interaction, and that is where you have low robustness. So that's a regime that you should avoid in an experimental realization. Okay, so this was my conclusion slide.
I kind of ran through this a little — it was a bit of a speedrun — just to give you an impression of utilizing these optimization ideas in a practical context.

Okay, so the next topic. Before saying anything: this is really all Lukas's credit — he deserves all the credit in the world for this. He conceived the idea and developed it, and I was happy and excited to help, but it's really all his. You should watch out for this guy; he is really very impressive.

Yeah, okay. So we call this exclusive-or-based quantum algebra — we're shifting gears a little bit here, in case I haven't mentioned it — and we call it ORCA. By the way, we've picked up on this trend of giving things cute names; we have now fully embraced it in presenting our work.

Okay, so let's try out the following little game, which is a bit of the starting point for establishing this method. You take the Pauli matrices and add the identity to them, so you have 0, 1, 2, 3, and you write those numbers in binary representation: 00, 01, 10, and so on. That's what we call the ORCA notation. Now let's apply it to a couple of products. For example, take σ_x times σ_y: in our notation you get i σ_z out of it, which is now i σ_11. You can also take this product, for example, and the product of these two operators becomes this matrix, 01. Or you can throw in a trivial one, where you multiply σ_01 by σ_00 and of course get the same matrix back. The interesting thing about this, in our opinion, is the operation on the indices. Just check out these indices, and let's see what happens if we look at them digit by digit, okay?
So what happens there? For example, here the first digit is a 0, the second is a 1, and you get a 1 — that's this entry here. Or here, the second digit of the first operator is a 1 and that of the second operator is a 0, which gives you a 1 — that's this line here. If two 1's collide you get a 0, and if you have two 0's you also get a 0. In other words, the indices in this binary representation follow an exclusive-or logic table. So this is the starting point of the methodology — which is basically kind of cute; I'm not sure this had been put this way before.

Then there is a second part to this: we have to take care of the prefactors — the i, or minus i, or 1 — and we write this in the following way. First, here we wrote down our su(2) algebra. If you now have a general product of these four operators, you get σ_{j XOR k} — in the sense of taking your binary numbers and applying the XOR digit by digit — and you have the structure factor, as we call it, as a prefactor, which we write as i to the power of something, in a product-type form. For this b_jk you can make a lookup table, as shown here, that gives you the right prefactor. I should mention that these are all very computer-friendly operations: taking the XOR digit by digit on binary numbers — perfect, right?
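Because the operations are so computer-friendly, the whole single-site algebra fits in a few lines. Here is a minimal sketch (my own variable names; the labeling 0=I, 1=σ_x, 2=σ_y, 3=σ_z follows the binary convention above):

```python
import numpy as np

# Pauli matrices with the binary labels 0=I (00), 1=sx (01), 2=sy (10), 3=sz (11)
PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# structure-factor lookup table: sigma_j sigma_k = i**B[j][k] * sigma_(j XOR k)
B = [[0, 0, 0, 0],
     [0, 0, 1, 3],
     [0, 3, 0, 1],
     [0, 1, 3, 0]]

def pauli_product(j, k):
    """Return (phase_exponent, label): sigma_j sigma_k = i**phase * sigma_label."""
    return B[j][k], j ^ k              # the XOR does all the matrix work

# verify the index algebra against the explicit 2x2 matrix products
for j in range(4):
    for k in range(4):
        b, l = pauli_product(j, k)
        assert np.allclose(PAULI[j] @ PAULI[k], 1j ** b * PAULI[l])
print("all 16 products verified")
```

For example, `pauli_product(1, 2)` returns `(1, 3)`, i.e. σ_x σ_y = i σ_z.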
Also, for this lookup table you will never need anything more than a four-by-four table, even if you scale up to large tensor products — these are all very computer-friendly operations. For example, you can extend this to tensor products of Pauli matrices. Here is a product of four σ matrices, and here is another one. What you do is write down a multi-index in our ORCA notation by concatenation — you just write the labels one after the other — giving you the multi-index pointing to this tensor product of σ matrices; similarly, this other one gets this multi-index here. Now you take the XOR digit by digit, as shown here, giving you I XOR J, which gives you the right sequence in the tensor product of σ matrices. Then you look up your structure factor — this exponent b — in pairs of digits, and put the sum as the exponent of i, in this case i to the power of two. This is how it scales up in a basically ideal way: instead of taking products of very large matrices, all you do is some index algebra. It's essentially an ideal speedup in this context — a little reminiscent of doing pointer algebra in a computer program, right?

Okay, and the plan is to use this structure as a good simulation tool for quantum dynamics on systems that can be thought of as composite two-level systems. So here is our generalized structure — maybe in the interest of time I'll keep this short. What we do is write our density matrix in this space of tensor products of σ matrices, as shown here.
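The concatenated-index product described here can be sketched as follows (again my own naming; each site carries a label 0..3 as before, and the phase exponents simply add modulo 4):

```python
import numpy as np

# per-site structure factors, as in the single-Pauli case: s_j s_k = i**B[j][k] * s_(j^k)
B = [[0, 0, 0, 0],
     [0, 0, 1, 3],
     [0, 3, 0, 1],
     [0, 1, 3, 0]]

def string_product(I, J):
    """Product of two Pauli strings, each a tuple of site labels 0..3.
    Returns (phase_exponent mod 4, label tuple):
    sigma_I sigma_J = i**phase * sigma_(I XOR J), site by site."""
    phase = sum(B[a][b] for a, b in zip(I, J)) % 4
    return phase, tuple(a ^ b for a, b in zip(I, J))

# check one case against explicit Kronecker products:
# (sx (x) sy) * (sy (x) sy) = (i sz) (x) I  ->  exponent 1, labels (3, 0)
PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
lhs = np.kron(PAULI[1], PAULI[2]) @ np.kron(PAULI[2], PAULI[2])
b, K = string_product((1, 2), (2, 2))
assert np.allclose(lhs, 1j ** b * np.kron(PAULI[K[0]], PAULI[K[1]]))
```

The cost per product is linear in the number of sites, independent of the 2^n matrix dimension — which is the "ideal speedup" being described.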
So now we have broken this up into these rho_i, the real-valued coefficients of these tensor products, and similarly for the Hamiltonian, which we also write in this way. We then looked at a couple of different equations of motion to describe different types of dynamics: for example the von Neumann equation, a master equation, or the imaginary-time evolution. Each of these you can translate line by line into the formalism and notation that I described to you.

This, then, is the algebraic structure, and this is the structure of the algorithm that we are using. You keep a dynamical list for your density matrix. The first entry is the super-index, written here as a decimal number, but you should think of it as the string of binary indices I described; next to it are the corresponding real-valued coefficients. The list is dynamical: its size will in general change in time. This here is the size of the list.
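To make one of these translations concrete, here is the von Neumann equation written out component by component in the index notation. The symbols follow the conventions described above and are my reconstruction, not a transcription of the slides:

```latex
% Expansions in the Pauli-string basis, with real coefficients:
\rho = \sum_{j} \rho_j \,\sigma_j , \qquad
H = \sum_{k} h_k \,\sigma_k , \qquad
\sigma_k \sigma_j = i^{\,b_{kj}} \,\sigma_{k \oplus j} .

% Von Neumann equation, component by component:
\partial_t \rho = -\,i\,[H,\rho]
\quad\Longrightarrow\quad
\partial_t \rho_m = -\,i \sum_{k \oplus j = m} h_k \,\rho_j
\left( i^{\,b_{kj}} - i^{\,b_{jk}} \right) .

% Since i^{b_{kj}} - i^{b_{jk}} is either 0 or \pm 2i, the right-hand side
% is real, so the coefficients \rho_m stay real under the evolution.
```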
Your Hamiltonian you break up in the same way: here is the multi-index for each component and here the real-valued coefficients, with the list having size H as shown. You can then use this to generate the update step of your time integration: you take the exclusive-or of the multi-indices, look up your structure factor, and that generates the time step of your time evolution. So you first calculate this right-hand side, and then you update your rho accordingly; in the example I am going to show we use a Runge-Kutta integrator of fourth order. Then, in order to keep the computation feasible, you have the option (you don't have to, but you can) of truncating your representation, throwing out all of the components that fall below a certain threshold epsilon. That keeps the description sparse.

We applied this to quantum annealing for the maximum independent set problem. You can see we are a little influenced by working with Rydberg systems, where this is discussed as a possible application. In a nutshell, you write down some graph, you imagine a unit circle around each site, and two marked (red) vertices within the same unit circle are not allowed. The maximum independent set problem is then to find a setup, or all setups, with the maximum number of red vertices. We applied both versions, the real-time and the imaginary-time evolution. Great.
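The update cycle described above can be sketched as follows. This is a minimal forward-Euler sketch rather than the fourth-order Runge-Kutta actually used; it assumes the same index conventions as above, sets hbar to one, and the function names are mine:

```python
# One update cycle for the sparse list representation: density matrix and
# Hamiltonian are hash maps from multi-index tuples to real coefficients.

B = [[0, 0, 0, 0],
     [0, 0, 1, 3],
     [0, 3, 0, 1],
     [0, 1, 3, 0]]

def drho_dt(rho, H):
    """Right-hand side of the von Neumann equation, d rho/dt = -i [H, rho]."""
    out = {}
    for K, h in H.items():
        for J, r in rho.items():
            M = tuple(k ^ j for k, j in zip(K, J))         # XOR of multi-indices
            b_kj = sum(B[k][j] for k, j in zip(K, J)) % 4  # structure factors
            b_jk = sum(B[j][k] for k, j in zip(K, J)) % 4
            # -i (i**b_kj - i**b_jk) is 0 or +/-2, so coefficients stay real
            coeff = (-1j * (1j ** b_kj - 1j ** b_jk)).real
            if coeff:
                out[M] = out.get(M, 0.0) + coeff * h * r
    return out

def euler_step(rho, H, dt, eps=1e-10):
    """Advance rho by dt, then truncate entries below threshold eps."""
    deriv = drho_dt(rho, H)
    new = dict(rho)
    for M, v in deriv.items():
        new[M] = new.get(M, 0.0) + dt * v
    return {M: v for M, v in new.items() if abs(v) > eps}  # keep it sparse

# Single spin precessing under H = 0.5 Z, starting from rho = 0.5 I + 0.5 X
rho = {(0,): 0.5, (1,): 0.5}
H = {(3,): 0.5}
print(drho_dt(rho, H))  # {(2,): 0.5}: a Y component starts to build up
```

The cost of one right-hand-side evaluation is the product of the two list lengths, which is why sparse representations are the favourable regime.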
Thank you. Here is the imaginary-time version of this, which I am going to skip. You find good performance in identifying the maximum independent sets, both for the imaginary-time and for the real-time evolution, as shown here.

Just to comment briefly on performance: what we show for our algorithm is the CPU time, actually obtained on a laptop, even though there are up to 22 qubits or so. On this axis you see the effective size of the computational task, the size of your density-matrix list times the size of your Hamiltonian list. These are different runs, and you see an approximately linear scaling. We are now trying to learn about hash maps and what exactly is realized in a MacBook, because that appears to be the remaining constraint, and it might add a small logarithmic dependence on top of this.

In conclusion on this part: I showed you the idea of ORCA, where you introduce this ORCA notation and obtain an efficient way of taking products of tensor products of Pauli matrices, essentially replacing a matrix multiplication by an exclusive-or on the indices, an index operation. Together with the structure-factor term, you basically rewrite the spin algebra as an index-and-prefactor algebra. You can then take this setup and use it for the simulation of quantum dynamics, as I described, and we used the maximum independent set problem as an example. Okay, I am again out of time, so I am going to skip through the remaining material, even though it is dear to my heart; we can talk offline if you would like to chat about it.
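For context, the classical target of the annealing benchmark can be stated in a few lines. This brute-force enumerator is my illustration, not part of the ORCA algorithm; it finds all maximum independent sets of a small graph, with edges playing the role of the unit-disk blockade constraint:

```python
from itertools import combinations

def maximum_independent_sets(n, edges):
    """Brute-force all maximum independent sets of a graph on vertices 0..n-1.

    edges are pairs of vertices that may not both be chosen (the role played
    by the unit-disk blockade constraint in the Rydberg picture).
    """
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, -1, -1):  # try the largest candidate sets first
        independent = [s for s in combinations(range(n), k)
                       if not any(frozenset(p) in edge_set
                                  for p in combinations(s, 2))]
        if independent:
            return independent

# A 5-cycle: the maximum independent sets have size 2
print(maximum_independent_sets(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
# [(0, 2), (0, 3), (1, 3), (1, 4), (2, 4)]
```

Exhaustive enumeration is exponential in n, which is exactly why the problem serves as an annealing benchmark.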
I'd be very excited to do that. So let me go to the conclusion page. The first three topics centered around optimal control and the optimal implementation of quantum gates and quantum algorithms in quantum systems; there is an optimization task associated with each, and I would like to emphasize the new algorithm that we put forth, our PEPPER method. Then there is our approach to simulating quantum dynamics on a classical computer, which sadly is all we have these days, which we call the ORCA method. I didn't get to talk about the dynamical BKT transition, so let me just put up the reference, and let me thank you so much for your attention.

[Question] Thank you for the very interesting talk. I wonder whether any work has been done to look at whether an XOR representation, or something like it, would work beyond just SU(2): for d-dimensional quantum systems, say, or fermionic ones?

[Answer] Awesome, yes. We have had similar thoughts, and we can chat more about this. There are extensions of this to other types of operators. We have something in the pipeline for a bosonic version, with a harmonic oscillator, where you can find a similar index algebra, and we are playing with fermions; we can chat about this and get you involved. It is a very natural and very good question how to extend this towards other degrees of freedom, and I think something should be possible, let's put it like that. I appreciate the question.
I should really emphasize, especially regarding these methods: if you find any of this even mildly amusing, send me an email, chat me up, and we will get you involved; I would be excited. You can add some PEPPER to your life, or write an ORCA. It is all there for you.

[Question] When you expand the controls in Fourier space, you introduce a cutoff on the number of Fourier modes involved in the expansion, right? Did you check whether the mitigation effect depends on the number of Fourier modes you keep?

[Answer] Exactly; more modes give you a better implementation. If you look at the comparison that I showed, you are roughly shifting both of the curves, both for the stepwise and for the Fourier parametrization, in a similar way, at least by eye; we could scan this in more detail. You end up with the usual optimization trade-off: since a larger space has to be scanned, you might have more challenges with convergence, which might be a bit slowed down, but you might also end up with a lower floor, because you give yourself the opportunity for a better implementation. At a qualitative level that is the global tendency. I think we have a plot in the appendix on this, but we can chat more about it.

[Chair] Are there any more questions?

[Question] Yeah, thanks for the nice talk.
[Question] So I wanted to ask about ORCA. There are sort of two ideas here, and I am not sure they are both crucial for the same things. One is basing things on Pauli strings and how they couple, and it seems I could do that whether or not I used this XOR machinery; that seems to be what would give you the real scaling advantage you might be seeing. Then it looks like the XOR part is just there to make it really fast on a computer. Is that right, or is there some more fundamental thing that I am missing?

[Answer] I think that is fair, and I can live with that description; making something fast on a computer is not a dirty thing, right? I think it is the ease of implementation: you are completely right that you could, in a similar way, take the X and the Y and make a lookup table to get the Z. But since this is such an easy implementation, let's just go for it.

[Question] That's great, I was just making sure I understand it conceptually. And then I didn't really understand why you were getting linear scaling in time; naively I would not have expected that.

[Answer] To reintroduce the notation: this is the length of the list of your density-matrix components, this is the length of the Hamiltonian list, and what is on the x-axis is the product of these two, which can be a large number. Sparse problems are essentially ideal for this. We are also not beating quantum supremacy: if your rho is not sparse and scales like four to the number of qubits, for example... I see some nodding, so life is good. I appreciate the question.
Appreciate the question Okay, I think we should stop here and thank you very much Ludwig again for your talk