Okay, Nomura from Tohoku University, who will be telling us about quantum many-body simulations using artificial neural networks, please.

Thank you very much for the kind introduction. First I'd like to thank the organizers for giving me this opportunity to present my work. My name is Yusuke Nomura, from Tohoku University, and today I will talk about quantum many-body simulations using artificial neural networks, with the collaborators listed here. Actually, this April I moved from Keio University to Tohoku University. Our university is located here in Sendai, and we have good food and good sake. Our group has also become big: we now have seven rooms, more than 300 square meters in total. But at the moment the only member is me, one person for 300 square meters, so I would like to welcome you to visit our group, because we have more than enough space.

Okay, so let me start. In condensed matter physics, solving quantum many-body problems is a very important topic, because materials themselves are quantum many-body systems. A quantum many-body problem is written as this very simple formula: we have a Hamiltonian, and what we want to do is obtain the eigenstates and eigenvalues of this Hamiltonian. The problem is that the Hamiltonian is very big: the dimension of the Hilbert space is exponentially large, so solving this on classical devices takes exponentially large computational cost. What we usually do instead is approximate the quantum state in some functional form. But in strongly correlated systems, various quantum phases compete with each other within a very small energy scale, so if the approximation carries a bias from some human insight, it may result in a wrong understanding or prediction. This is why I am interested in machine learning and neural networks: by using them, we may be able to mitigate unnecessary biases in solving quantum many-body problems.

Okay, so let me go into the detailed topics. We will cover three topics: the first two are a variational method at zero temperature and a finite-temperature method, and finally we move on to a quantum-to-classical mapping.

Let me start with the variational method at zero temperature. Again, what we want to solve is the quantum many-body problem, and the question is how to approximate the quantum state. For simplicity, let us consider a spin-1/2 quantum spin system. What we do with the neural network is connect an input, the spin configuration, to an output, the wave-function amplitude; in the intermediate step we put a neural network connecting this input and output.

Here is an example, the restricted Boltzmann machine (RBM). The RBM consists of sigma spins, which correspond to the physical degrees of freedom, and hidden spins h, which are auxiliary degrees of freedom. Both the sigma spins and the h spins are Ising spins. We define the Boltzmann weight for this extended Ising system in terms of the magnetic fields a and b and the classical interaction W. Once the Boltzmann weight is defined, tracing out the h degrees of freedom gives the wave function, like this. If the variational parameters a, b, and W are real, the Boltzmann weight is positive and real, so the resulting wave function is also positive. But in general a wave function can be negative or even complex, and in that case we extend the variational parameters to complex variables. In this setting, quantum correlations are taken into account through the connections to the hidden degrees of freedom. Another good property of the neural network is universal approximation: in the limit of an infinitely large number of hidden units, we can represent any wave function.

Because there are no interactions among the h spins, we can trace out the h degrees of freedom analytically, which gives this closed formula for the wave function. Using this formula we can efficiently compute the wave-function amplitude, which depends on the variational parameters a, b, and W. The problem is then how to optimize these parameters.

Let me show a very simple demonstration using the one-dimensional antiferromagnetic Heisenberg model. If we want to approximate the ground state, which is the lowest-energy state, we can take the energy as the loss function; by minimizing this loss function we approximate the ground state. So first we initialize the variational parameters with small random numbers, and then we minimize the energy. The optimized variational parameters look like this: we see that they change sign between here and here, which means that the RBM has learned the antiferromagnetic correlations in this system. The resulting wave function reproduces the exact wave function, like this. The system size here is very small, so we can compute the wave-function amplitude for all spin configurations, but even when the system becomes large we can compute the energy expectation value and its derivatives with the Monte Carlo technique, so by combining with Monte Carlo sampling we can apply the method to larger system sizes. This is, basically, the variational method using neural networks.

This kind of neural-network variational method was first introduced by Carleo and Troyer in 2017, for spin systems without frustration. The method is still very new.
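The traced-out RBM amplitude described above can be sketched in a few lines. This is only a minimal illustration: the sizes, the random parameters, and the function names are invented for the demo, and are not the actual code used in the work.

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    """RBM wave-function amplitude with the hidden spins traced out analytically:
    psi(sigma) = exp(a . sigma) * prod_j 2 cosh(b_j + sum_i W_ij sigma_i)."""
    theta = b + W.T @ sigma               # effective field felt by each hidden unit
    return np.exp(a @ sigma) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(0)
n_vis, n_hid = 4, 8                             # sizes chosen only for illustration
a = 0.01 * rng.standard_normal(n_vis)           # visible biases
b = 0.01 * rng.standard_normal(n_hid)           # hidden biases
W = 0.01 * rng.standard_normal((n_vis, n_hid))  # visible-hidden couplings
sigma = np.array([1, -1, 1, -1])                # one Ising spin configuration
print(rbm_amplitude(sigma, a, b, W) > 0)        # real parameters -> positive amplitude
```

With complex a, b, W the same closed formula gives complex amplitudes, which is how a nontrivial sign structure enters.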
So first we need to do benchmark calculations to check the accuracy of the method. Through various developments and benchmarks, we have started to understand that these neural-network quantum states are indeed powerful. What we should aim for, then, is to apply them to challenging systems: for example, frustrated spin systems or fermion systems, where quantum Monte Carlo suffers from the sign problem.

Okay, let me start with the extension to fermion systems; please refer to this paper if you are interested. Here we apply the method to the two-dimensional Hubbard model, a very fundamental model that may be important for describing high-Tc superconductivity in cuprates. Our idea is to combine the neural network with a fermionic wave function: the neural-network part is based on the RBM, and it is combined with a fermionic wave function. Because the fermionic wave function is a kind of Slater determinant, it is antisymmetric, so by combining the antisymmetric wave function with the symmetric one, the total wave function becomes antisymmetric, and we can then apply the method to fermion systems. It is also interesting to rewrite the problem like this: in this setting, we approximate the exact wave function divided by the fermionic wave function using the neural network. This means that if the fermionic wave function is sophisticated, the neural network can easily approximate the remaining quantity.

This is indeed the case. Here we show a benchmark study on an 8-by-8 square lattice at half filling. At half filling we can apply the quantum Monte Carlo method without a sign problem, so we take the QMC energy as the reference. Here we show the relative error in the ground-state energy of the Hubbard model, and we basically have three curves.
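To illustrate why multiplying an antisymmetric fermionic factor by a symmetric correlator keeps the total wave function antisymmetric, here is a toy two-particle sketch. The orbitals and the simple exponential correlator standing in for the RBM factor are invented purely for illustration.

```python
import numpy as np

def slater_det(xs, orbitals):
    """Antisymmetric part: determinant of single-particle orbitals phi_k(x_i)."""
    M = np.array([[phi(x) for phi in orbitals] for x in xs])
    return np.linalg.det(M)

def symmetric_correlator(xs):
    """Toy stand-in for the (symmetric) neural-network factor:
    depends only on the distances |x_i - x_j|."""
    xs = np.asarray(xs, dtype=float)
    return np.exp(-0.1 * np.sum(np.abs(xs[:, None] - xs[None, :])))

def psi(xs, orbitals):
    # antisymmetric x symmetric -> antisymmetric total wave function
    return slater_det(xs, orbitals) * symmetric_correlator(xs)

orbitals = [lambda x: np.sin(np.pi * x), lambda x: np.sin(2 * np.pi * x)]
x12 = [0.2, 0.7]
# exchanging the two particles flips the overall sign (fermionic statistics)
print(np.isclose(psi(x12, orbitals), -psi(x12[::-1], orbitals)))  # True
```

Swapping two particles swaps two rows of the determinant (sign flip) while leaving the symmetric correlator unchanged, which is the whole point of the combination.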
In the first, the RBM is combined with the Fermi sea; in the second, we combine the RBM with a pair-product state; and finally we symmetrize the pair-product state to make it more sophisticated. We can see that the accuracy of the wave function improves drastically when the RBM is combined with a sophisticated fermionic wave function. The pair-product state is a kind of extension of the Slater determinant, and it is very flexible: for example, it can represent superconducting states or antiferromagnetic states. These results suggest that incorporating quantum correlations through the pair-product state helps the neural-network part. A good point of this combination is that we can reduce the number of variational parameters needed to reach high accuracy, which is helpful when we want to apply the method to larger system sizes.

Okay, next we move on to the application to frustrated spin systems. Here our purpose is to go beyond benchmarks: we apply the method to challenging frustrated systems and try to reveal the physics in these systems. We apply our method to the antiferromagnetic J1-J2 Heisenberg model on the square lattice. The Hamiltonian looks like this: J1 is the nearest-neighbor spin interaction and J2 is the next-nearest-neighbor spin interaction. The Hamiltonian is very simple, but because J1 and J2 compete with each other, it gives nontrivial physics. What we know for this model is that, if we fix J1 and change J2, for small J2 we have Néel-type long-range order, while for large J2 we have a stripe-type phase. In between these two phases the frustration effect becomes strong, and an interesting phase such as a quantum spin liquid may emerge. Although the Hamiltonian is very simple, because of the J2 term we cannot apply quantum Monte Carlo, due to the sign problem, so the zero-
temperature phase diagram of this model had not been settled. So we apply the RBM-plus-pair-product wave function to this J1-J2 model. Again we combine the RBM part with a fermionic wave function, but since this is a quantum spin system, we need to map onto the subspace of the quantum spin system; for this purpose we apply the Gutzwiller projection to the pair-product state. This pair-product part can represent, for example, resonating-valence-bond-type wave functions, so it is already powerful for quantum spin systems, and by combining powerful wave functions we make the ansatz even more powerful. Also, quantum states of finite-size systems are labeled by quantum numbers, so we apply total-momentum and spin-parity projections to symmetrize the wave function. A good point of these projections is that we can compute not only the ground state but also excited states. In this model the ground state has total momentum zero and even spin parity, so if, for example, we compute a finite-momentum state, it corresponds to an excited state.

We start with a benchmark study. First we consider the ground state on a 10-by-10 lattice at J2 = 0.5: as of 2021, our method gave the best variational energy among the compared methods. We also checked the accuracy of the excited states on a 6-by-6 lattice: this is the triplet excitation with momentum (pi, pi), and this is the singlet excitation with momentum (pi, 0). Comparing with exact diagonalization results, we see that we can compute excited states accurately. It is now 2024, and the accuracy of variational methods has improved further: recently, using a kind of transformer ansatz, the record for the best energy has been updated. That improvement was made possible by the development of an efficient
optimization method called minSR; if you are interested, please have a look at this paper, which I think is very nice.

Okay, so now we can compute ground and excited states very accurately. To study the zero-temperature phase diagram, we combine two independent analyses: one from the ground state and one from the excited states. The ground-state analysis is the more conventional one: we approximate the ground state and compute correlation functions, and by studying the correlation functions we study the properties of the quantum phases. But we can also infer the phase diagram from the excitations, because the ground-state properties and the excitation structure correspond to each other. By looking at changes in the excitation structure, we can study the ground-state phase diagram. What I mean is that, for example, the excitation structure changes at this point: the lowest excitation is a triplet in this region, but beyond this point it changes to a singlet. By looking at this crossing point, and at its system-size dependence, we can estimate the phase boundary in the thermodynamic limit; this crossing point corresponds to the phase boundary between the quantum spin liquid phase and the valence bond solid phase, since we know that in the valence bond solid phase the lowest excitation is a singlet. Performing these two independent analyses gave consistent results, and we finally obtained this zero-temperature phase diagram: a Néel phase at small J2 and a stripe phase at large J2, and in between a finite region of quantum spin liquid and valence bond solid. The valence bond solid is a symmetry-broken state, but the quantum spin liquid is a more exotic
phase: there is no symmetry breaking, and the quantum spins fluctuate even at zero temperature. Because our method can compute the excitation structure, we studied it in this quantum spin liquid phase. In a usual, conventional antiferromagnet, like a Néel-type antiferromagnet, we have a gapless Nambu-Goldstone mode at the ordering vector (pi, pi). But here the excitation structure is more exotic: not only (0, 0) and (pi, pi), but also (pi, 0) and (0, pi) become gapless, and the dispersion is predicted to be Dirac-like. This unusual excitation structure suggests the existence of fractionalized excitations called spinons, and the spinons are predicted to have a Dirac-type dispersion, like this. So this suggests that the quantum spin liquid is nodal: our method revealed a nodal spin liquid with fractionalized spinons.

Okay, so this is our result, but because this J1-J2 model is such a fundamental model, other people have also studied it. This is the result by DMRG, this is the result by the variational Monte Carlo method, this is our method, and this one is based on PEPS. There are quantitative discrepancies in the phase boundaries, but at least we have qualitative agreement in that the quantum spin liquid phase exists over a finite J2 region. This kind of cross-check is very important, and in the future, to establish the phase diagram of, for example, the doped Hubbard model, cross-checks among the various variational methods will be essential. Given this situation, a common metric of variational accuracy is highly important, and to construct such a metric we recently carried out an international collaboration; if you are interested, please have a look at this paper.

To summarize the first part: neural-network quantum states, if properly constructed, can give state-of-the-art accuracy, and we can use them to study interesting physics.

Okay, then we move on to the second topic, the extension to finite-temperature simulations. Finite-temperature simulation is more challenging, because we need to take into account thermal fluctuations on top of the quantum fluctuations. Unfortunately, for two-dimensional spin systems we do not have many powerful methods for finite-temperature simulations, so we tried to propose a new method based on neural networks. We use the idea of purification: the finite-temperature density matrix is mapped onto a pure state of an extended system, and if we trace out the ancilla degrees of freedom, we get back the original density matrix. Our idea is to represent the pure state of the extended system by a Boltzmann machine, like this: sigma are the system spins and sigma-prime are the ancilla spins, and the correlations between sigma and sigma-prime are taken into account through the connections to the hidden degrees of freedom. By tracing out the sigma-prime ancilla degrees of freedom, we can then simulate the finite-temperature properties of the original system.

In this method, we first prepare the purified infinite-temperature state, which can be prepared as the maximally entangled state between the sigma and sigma-prime spins, like this. This infinite-temperature state can be represented exactly by a Boltzmann machine with complex weights i pi over 4, like this. That is the initial state; starting from it, we perform imaginary-time Hamiltonian evolution, which lets us study finite-temperature states. To approximate this imaginary-time Hamiltonian evolution we use the stochastic reconfiguration method: in stochastic reconfiguration, we try to approximate the exact imaginary-time evolution as accurately as
possible within the representability of the variational ansatz. Using the stochastic reconfiguration method, we can thus approximate the imaginary-time Hamiltonian evolution.

We applied this finite-temperature method to the two-dimensional J1-J2 model. In the first part we studied the zero-temperature phase diagram of this model; now we study its finite-temperature properties. Here we show benchmark results on a 6-by-6 lattice: these are the energy, the specific heat, and the spin structure factor as functions of temperature. We study two cases, the unfrustrated case J2 = 0 and the frustrated case J2 = 0.5. The symbols are our method and the solid curves are the exact references. Comparing with the exact references, we see that our method accurately reproduces the exact results. These exact results were obtained by a kind of Lanczos-type method, whose computational cost scales exponentially with system size, so that method cannot reach larger systems. In contrast, with our method the computational cost is reduced to polynomial in the system size, so we can simulate larger systems.

Okay, then finally we move on to the quantum-to-classical mapping method. So far we performed the approximation of the imaginary-time Hamiltonian evolution using the stochastic reconfiguration method: again, in stochastic reconfiguration we try to approximate the exact evolution as accurately as possible within the representability of the variational ansatz. We applied this variational-type approach mainly using shallow networks like the RBM, because the numerical optimization of the variational parameters is rather easy for shallow networks. But what we found is that things change if we introduce a deep neural network, like a deep Boltzmann machine. Here, this is the structure of the deep
Boltzmann machine. In the RBM we only have sigma and h spins, but compared to the RBM structure, the deep Boltzmann machine additionally has a deep layer. The representability of the deep Boltzmann machine is much more flexible than that of a shallow network like the RBM, and what we found is that we can exactly and analytically reproduce the short-time imaginary-time Hamiltonian evolution using the deep Boltzmann machine, precisely because of this much more flexible representability. So what we can do is the following: first we prepare the initial state as a deep Boltzmann machine; then we apply the imaginary-time Hamiltonian evolution, using the Suzuki-Trotter decomposition; and we can reproduce each decomposed imaginary-time step exactly with the deep Boltzmann machine. In the limit of long imaginary-time evolution, we eventually obtain the ground state. Compared to the stochastic reconfiguration method, where we do a numerical optimization of the variational parameters, in this method we do not need to perform any numerical optimization: everything is done deterministically, and the mapping is exact up to the Trotter error.

Let me show an example using the one-dimensional transverse-field Ising model. The Hamiltonian can be decomposed into the interaction part and the transverse-field part, and the problem is how to reproduce each short-time propagator with the deep Boltzmann machine. For the interaction propagator, what we do is add one hidden degree of freedom and put in new bonds like this. This is already interesting, because it means that to reproduce the classical propagator, a shallow network is enough.
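The statement that one auxiliary spin suffices for the classical (interaction) propagator can be checked numerically. The sketch below traces over a single auxiliary Ising spin coupled to the two physical spins; the coupling is fixed by matching the two cases sigma_i sigma_j = +1 and -1, and the propagator is then reproduced up to an overall constant. The value of K and the function names are assumptions for the demo.

```python
import numpy as np

def traced_hidden_factor(si, sj, w):
    """Trace over one auxiliary Ising spin h coupled to both physical spins."""
    return sum(np.exp(w * h * (si + sj)) for h in (-1, 1))

K = 0.3                              # delta_tau * J for one Trotter step (demo value)
w = 0.5 * np.arccosh(np.exp(2 * K))  # matching condition: cosh(2w) = exp(2K)

# ratio of the traced network factor to the target propagator exp(K si sj)
ratios = [traced_hidden_factor(si, sj, w) / np.exp(K * si * sj)
          for si in (-1, 1) for sj in (-1, 1)]
print(np.allclose(ratios, ratios[0]))  # True: reproduced up to a constant
```

For aligned spins the trace gives 2 cosh(2w) = 2 exp(2K), for anti-aligned spins it gives 2, so the ratio to exp(K si sj) is the same constant 2 exp(K) in every configuration; constants drop out of any normalized expectation value.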
We do not need the deep network for that. On the other hand, to exactly reproduce a quantum propagator, like the transverse-field propagator, the deep degrees of freedom play an important role: by adding a deep unit and a hidden unit in this way, we can reproduce the transverse-field propagator. Now that we have a way to reproduce each short-time propagator, by applying them successively we can analytically construct a deep Boltzmann machine that represents the ground state.

Okay, so this is the constructed deep Boltzmann machine. Again, we do not need to do any numerical optimization of the variational parameters: all the parameters of this deep Boltzmann machine are determined by analytical formulas. The number of hidden and deep units is proportional to the system size and also to the imaginary-time length. Once the deep Boltzmann machine is constructed, we can compute physical quantities by Monte Carlo sampling over its deep spins, hidden spins, and system spins. The deep Boltzmann machine is a classical Ising system, so in this method we map the quantum state onto a classical Ising system; in this sense, the framework provides a novel quantum-to-classical mapping. We can actually show that this mapping encompasses the path-integral formalism: a specific construction, for example this one, corresponds to the path integral. An interesting point is that we can now connect the concept of the deep layers in deep learning to the imaginary-time direction in statistical physics.
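The logic that the whole construction relies on, that successive short imaginary-time steps filter out everything but the ground state, can be seen in a minimal exact toy model: a two-site transverse-field Ising model treated by plain linear algebra. The couplings and the first-order short-time step are chosen only for illustration.

```python
import numpy as np

# Two-site transverse-field Ising model (demo couplings)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
I = np.eye(2)
J, h, dtau = 1.0, 0.7, 0.05
H = -J * np.kron(Z, Z) - h * (np.kron(X, I) + np.kron(I, X))

psi = np.random.default_rng(1).standard_normal(4)  # generic initial state
step = np.eye(4) - dtau * H        # first-order approximation of exp(-dtau * H)
for _ in range(1000):              # long total imaginary time tau = 50
    psi = step @ psi
    psi /= np.linalg.norm(psi)     # keep the state normalized

e_itime = psi @ H @ psi
e_exact = np.linalg.eigvalsh(H).min()
print(np.isclose(e_itime, e_exact))  # True: excited states have been filtered out
```

Each application of the short-time step multiplies every eigencomponent by a factor that is largest for the lowest eigenvalue, so repeating it projects onto the ground state; the deep-Boltzmann-machine construction implements exactly this repetition, with one extra layer per Trotter step instead of a matrix multiplication.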
This kind of connection is quite interesting to me. Finally, let me compare this approach with the variational approach. In the quantum-to-classical mapping, the good point is that we do not need to perform any numerical optimization of the variational parameters: all parameters are given in a deterministic way. The problem is that, to compute physical quantities, we need to do Monte Carlo sampling not only over the sigma spins but also over the hidden spins, and then, similarly to the case of the path integral, if we want to apply the method to frustrated spin systems, a sign problem occurs when we compute physical quantities. Due to this severe sign problem, we cannot apply the quantum-to-classical mapping method to frustrated spin systems. In the variational-type approach, on the other hand, we need to do the numerical optimization of the variational parameters, but we can apply it to frustrated spin systems.

Finally, as a side remark, using this method we can also exactly map a quantum circuit onto a deep Boltzmann machine. It is well known that quantum circuits can be represented by tensor networks, but they can also be represented by deep Boltzmann machines.

Okay, so this is the summary. At the end we talked about the quantum-to-classical mapping method, which is nice because we do not need to perform numerical optimization, but which suffers from a sign problem when applied to frustrated spin systems. In that case, using the variational method, we can study both the zero-temperature and the finite-temperature properties of frustrated spin systems. This kind of topic is covered in this review paper, so please have a look. And this is my final remark. With that, I'd like to thank you for your kind attention.

Okay, thanks a lot for the very clear presentation. We have time for questions.

Thank you very much.
I have a comment related to the second part of the talk, about the finite-temperature computation. Do you know what the scaling with the system size, with the volume of the system, is of the resources you need in order to keep the accuracy fixed?

Yeah, that is an interesting point, but the problem is that for finite-temperature simulations we do not have many references against which to check the accuracy. For ground-state properties, even when the exact energy is not known, the energy variance, for example, tells you how close you are to the ground state. But when we want to apply the method to larger system sizes at finite temperature, my problem is that we do not know how good the method is, because we have no reference; at J2 = 0 we can use quantum Monte Carlo. For the moment I have not studied the system-size dependence of that. And yes, I agree, the computational cost depends on how many hidden units we need to introduce: if we need a very large number of hidden units, then the computational cost grows accordingly.

Yeah, thanks for the talk. I would also have a question about the computational cost in that case. If you only have a polynomial cost, compared to the, so to say, exponential complexity of quantum mechanics, you have to put the complexity somewhere: in some knowledge about the state, or in some knowledge about the system. So how do you achieve, in this simulation technique, that you only have a polynomial cost even though the problem is still exponentially complex?

Yeah, so again, this scaling is for the moment heuristic; we do not have a theoretical background for it. Even in the ground-state approximation, we do not know how many hidden units we need to introduce to reach good accuracy. So the answer is: we do not know the scaling.
Yeah, there is another question here. Thank you very much for the nice talk and the nice results. I have a question regarding the first part of the talk. Have you compared, or is it possible to pair, these ideas with other neural quantum states, like, I don't know, RNNs or so? Have you tried it, and is it an advantage or a disadvantage?

Yeah, I haven't tried, but it is possible; I just haven't tried it. Of course we can think of other architectures, and so far we do not know the best architecture for the neural network, but this should be investigated in more detail, I think.

In the last part of your talk you mentioned this mapping to the path integral, and you mentioned, if I understood correctly, that you cannot avoid the sign problem, right? So how does the sign problem manifest in this mapping? I could not see it.

Yeah, actually, let me start from the comparison with the variational method. In the variational method we can just take the square of the wave-function amplitude as the weight, which is in principle positive, and then we only need to sample the sigma spins, the system spins. But in this case we also need to do Monte Carlo sampling over the hidden degrees of freedom, not only the sigma spins. When we apply the mapping to frustrated spin systems, these bonds become complex, so if we also do the Monte Carlo sampling over these auxiliary degrees of freedom, negative signs appear. For the unfrustrated case the bonds here are real, and then the overall weight is positive; in that case, as in the path integral, we can do the Monte Carlo sampling.

Thanks for the nice talk. I have a question about the last part of your talk, where you mentioned that you can map quantum circuits to deep Boltzmann machines. Can you tell us a little bit more about that?
Basically, what are the advantages and disadvantages of this approach, and when could one expect the Boltzmann machine approach to work better, or to give something new?

Yeah, for the moment there is, unfortunately, no practical advantage. For example, this is the CZ gate: it is not an imaginary-time evolution, but it is a two-spin operator, so using the same technique we can also handle this kind of propagator acting on two spins, and with a similar idea we can map it onto the deep Boltzmann machine. But unfortunately, even in this case, the bonds become complex, so practically there is no advantage. If we could find a mapping to a more compact network, with a shallower neural network, without losing fidelity, then we might have some advantage, but for the moment there is no practical advantage. Using the same technique we can also perform, for example, real-time Hamiltonian evolution, but again the bonds become complex in that case, and the sign problem appears. So for the moment there is no practical advantage.

Thanks for the nice talk. I was wondering, do you see any applicability, or challenges, of your RBM-plus-pair-product ansatz for fermions in a real-space formulation? It seems natural to describe superconductivity with that.

Yeah, of course the Hubbard model is one... oh, I see.

I mean, if you look at electron-gas-type models, or continuous...

Electron gas, you mean the continuum?

Yes, exactly.

I haven't thought about that, but there are several ways to do it: one way, of course, is to introduce some basis set, map onto the second-quantized form, and then apply this. Yeah, I need to think; the answer is I need to think about how to apply it. All right.
Thank you. I mean, it seems like you could write your pair-product wave function with orbitals, and the orbitals you could write in real space.

Yeah, okay, thank you. Unfortunately, as I said, I only have one member in my group, so please come and collaborate.

Maybe one last question, if there is one? If not, let's thank the speaker again. Thank you very much.