So, not surprisingly, I am going to talk about learning quantum systems. I hope you can hear me now. Okay, thanks for the introduction. First of all I want to thank the organizers for inviting me to this beautiful place; I am really enjoying this conference a lot. I tried to connect my title to the conference, so it has quantum in it and it has learning in it. I am an experimentalist coming from Vienna, and in a way I cannot do machine learning, because I am not used to using a computer; I like to use the little quantum systems that we build in the lab. But recently we have been thinking a lot about how we might use machine learning and how to learn more about quantum systems, and this is the story I want to tell you today. I picked out two learning tasks. We are learning a lot in the lab every day, but I took out two specific tasks. The first one is more on the experimental and technical side: it is about how we built a little physics-inspired model to produce optimal optical potentials with the help of a digital micromirror device, very much in the spirit of what Julian was talking about two days ago, namely imprinting optical potentials onto clouds of ultracold atoms. The other learning task is more on the quantum simulation side: I want to show you a very recent project of ours on Hamiltonian learning. We have heard this term a lot this week already, but I might give it another spin, because I want to show you how you can learn effective field theories from measurement data in your lab. So we are asking the question: what is an effective Hamiltonian describing this system?
Okay, but let me start with the first part. For this I will introduce the experimental platform that we have in our laboratories in Vienna. What we are using is a so-called atom chip. This is a microfabricated structure through which we can send currents, oscillating or DC, to create magnetic fields, and with the help of these magnetic fields we can trap ultracold rubidium atoms below the surface of the chip. Our speciality is to generate thin quantum wires: we create two one-dimensional Bose-Einstein condensates of rubidium-87 below the atom chip, where you can explore the physics coming from the on-site interaction — the atoms interact repulsively with each other — and from the tunneling between the two wires. You can think of it as two quasi-Bose-Einstein condensates — quasi because we are in 1D, so there is no true condensation — which we want to describe in terms of the relative degrees of freedom. As I said, there is some tunneling going on that couples the two wires to each other, and what we are interested in are the relative degrees of freedom: the relative density fluctuations, which we call π, and the relative phase fluctuations, which we call φ. Later in the talk I will say more about the physics and how we can deduce the effective physics describing these systems. What is relevant now is that when we create these systems below the atom chip, in the longitudinal, one-dimensional direction the trapping potentials look like this. This is something you often have in the lab, whether the trapping potential comes from a magnetic field or an optical dipole beam:
what you have is harmonic trapping confinement. And this is in a way not what you want, because these systems then feature a spatially dependent atomic density, which leads to spatially dependent interactions and so on. What you would like to have is, for example, a flat box potential, or at least control over the trapping configuration in this longitudinal direction. So that is what we set out to do — and we were not the first; many labs do this, for lattice systems as well as for continuous systems like ours. You have your cloud of ultracold atoms, and in addition to the magnetic fields you can shine in light fields from the side to create optical dipole potentials that compensate the harmonic confinement. This is the setup we came up with: we have a digital micromirror device (DMD), which is a pixelated matrix of small mirrors that you can switch to an on or off position. With this you can create essentially arbitrary light patterns by shining a light source — a laser, or in this case an SLD — onto it. Then you image the light pattern that you create in the plane of the DMD with some lenses (the details do not matter here) onto the atoms, or, as in our case here with a test setup, onto a CCD camera. And then maybe you also want to do something like optimal control of your pattern: you give some feedback and modify the DMD pattern to create optimal optical potentials. Just so you see that we can really do what we want, this is one of the patterns we created — the logo of our home university. Always a bit of self-advertisement.
Yeah, so you can see we can do really arbitrary patterns. But now the question: if you can do everything, why do you want to do machine learning? What you usually have is some input — in this case a pattern that we put onto the DMD. Then you have a system which processes this input — in our case, for example, the optics that we use to image the DMD pattern onto the atoms. You get some output, and this you can compare to some target; for example, you want a flat box trapping potential. However, the world is not perfect. Most likely it is a good thing that the world is not perfect, but in this case it is a bad thing, because you do not end up with a box — you have some residual deviations. So you calculate the error between what you get out and the target, and you do some feedback. In our case, however, we wanted a trained model representing the system including all its imperfections. Why do we want this? Because our experimental system is slow: the machine produces a picture — one realization of the quantum simulation — every 30 seconds or so. This means optimizing over hundreds of iterations takes a lot of time, and you would like to save this time, because if you can save the time spent optimizing an optical potential, you can do some very interesting quantum simulation instead. That is why we were aiming for a model that lets us do this optimization of the optical potential offline, without using the actual system. So we asked the question: how can we generate an efficiently trainable machine learning model for our experiment?
And this is what we in the end called a physics-inspired learning model, which is a quite fancy term for something very simple, as you will see on the next slide. So what did we do? We thought about what is actually happening in the system. You have a 1D virtual input — 1D is enough in our case because we have a 1D system — and you get some optical potential as an output. The question is how to set up the model in between. First we put in a layer which represents the shape of our beam: we do not put a homogeneous light beam onto the DMD, the beam already has some shape. Secondly, there is imaging going on, and this imaging has a finite resolution, so we include a point spread function, which captures the resolution of the optical system. Then there can be some offsets — from the camera doing the measurement, or from other sources of noise — and there is some additional noise appearing here as well.
Okay, and then we added one polynomial nonlinear layer to include essentially everything else, but also the mapping from the 2D pictures that we have on the DMD to the 1D virtual input. With this we set up our very simple physics-inspired model that should mimic the optical setup we were using. Then we trained this model — for that we did have to use the actual setup, a test setup with light: we put several patterns onto the DMD and observed the output. So we could train this machine learning model, and then we could go to the experimental optimization. For the feedback in this optimization — maybe for the experts, or if you are interested — we use an algorithm called iterative learning control. I will not go into detail; I give a citation on the next slide, and if you are interested we can talk afterwards.
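To make the layer structure concrete, here is a minimal sketch of such a forward pass in Python. All names, the Gaussian beam and PSF shapes, and the exact layer ordering are illustrative assumptions, not our actual trained model:

```python
import numpy as np

def forward_model(u, beam_shape, psf, offset, poly_coeffs):
    """Forward pass of the layered model: each layer mirrors one
    physical effect of the optical setup (a sketch, not the exact
    trained model)."""
    x = u * beam_shape                    # layer 1: inhomogeneous beam profile
    x = np.convolve(x, psf, mode="same")  # layer 2: finite imaging resolution (PSF)
    x = x + offset                        # layer 3: camera offset / background
    return np.polyval(poly_coeffs, x)     # layer 4: polynomial nonlinearity

# Toy usage: a 64-pixel virtual input through an assumed Gaussian beam and PSF.
n = 64
grid = np.arange(n)
beam = np.exp(-((grid - n / 2) ** 2) / (2 * 15.0**2))   # assumed beam shape
psf = np.exp(-(np.arange(-5, 6) ** 2) / (2 * 1.5**2))
psf /= psf.sum()                                        # normalized PSF
out = forward_model(np.ones(n), beam, psf, 0.02, [0.9, 0.0])
```

The point of keeping the layers physical is that each parameter (beam shape, PSF width, offset) has a direct interpretation, which makes the model cheap to train from few measured patterns.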
So this is how the optimization works in the end: you have some initial guess, you generate the 1D virtual input, you feed it into your machine learning model, you get a predicted optical output, compare it to your target potential, calculate the error, put it into this black box — the iterative learning control — and update the virtual input. You repeat this for as many iterations as it takes until you are happy with the result. This is what we call offline closed-loop optimization: we can now do a closed-loop optimization, but we can do the whole thing offline, without using the experimental system, so the experiment can do something else in the meantime — some interesting quantum simulation we already prepared. In principle we can also do the whole thing with the system itself instead of offline. So this was the idea, and the model as well as the experimental results are summed up in this paper here. If you want to know more about iterative learning control — this is something we got from our friends at the Automation and Control Institute, so people really coming from the control side were involved — you can find the details in this paper. So what were the results? Here are some examples of an optical potential where we optimized for a flat potential in this region, and on the right side you see the root-mean-square error in percent as a function of the number of iterations. You do this optimization several times. So what is the constant purple line here at ten percent?
This is the result we get from the offline optimization. It is constant because it uses no iterations on the system: it is done purely with our physics-inspired model. This is the level we can reach without using the system, after training the model. What we compare it to is the green line, what we call a heuristic algorithm. This was already used in another experiment in Vienna — it was our baseline, used so far for optimizing these potentials. And what you can see is that we reach the same accuracy with our physics-inspired model, so without using the experiment, for which the heuristic algorithm would need something like 75 shots on the experiment. So just compared to this we already save a lot of shots on the true physical machine — a big advantage. On the other side, you see that our feedback mechanism also works quite well: the red line is the online optimization, but now with iterative learning control, which was also not used before, and you see you need something like 10 shots to reach this level, which is the final level we can achieve. In the end, what we plan to do is a pre-optimization offline and then just a few steps online using the machine — just the final fine-tuning online. This is what we envision. So this was my first example of how we used machine learning — or a little learning model inspired by machine learning — to optimize processes in preparing a quantum simulator.
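The offline closed-loop optimization just described can be sketched as follows. The update here is a plain proportional, ILC-style correction (learning gain times the error) — a simplified stand-in for the actual iterative-learning-control algorithm, which we do not reproduce here:

```python
import numpy as np

def offline_optimize(model, u0, target, gain=0.5, n_iter=50):
    """Offline closed-loop optimization against the trained model:
    predict, compare to target, feed the error back into the input.
    The proportional update is a stand-in for the real ILC algorithm."""
    u = u0.copy()
    for _ in range(n_iter):
        err = target - model(u)   # predicted output vs. target potential
        u = u + gain * err        # update the 1D virtual input
    return u

# Toy usage: a linear "system" with an unknown attenuation of 0.8,
# optimized towards a flat box target -- purely illustrative.
model = lambda u: 0.8 * u
target = np.full(32, 1.0)
u_opt = offline_optimize(model, np.zeros(32), target)
```

Because the loop only ever calls the trained model, all of these iterations are free of experiment time; only the final fine-tuning needs shots on the real machine.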
Yeah, and as I said, this work is summarized in this publication. I want to acknowledge the people here already, because the people in the second project will be somewhat different. On the PhD side this project was led by Martino Calzavara and Evgeny Kuriatnikov, and we worked together with the group of Andreas Deutschmann and Andreas Kugi in Vienna and with Tommaso Calarco and Felix Motzoi in Jülich. Good. Are there any questions so far on this part? If not, I will switch gears a little. Yes, Chris. So — this is obviously a question to an experimentalist — this is something which, in all its details, will only come up once we use it on the experiment for a long time. What we saw on the test setup so far is that it is actually rather stable. Concerning interference and so on — you mean random speckles? That is why I flashed this briefly: we were also testing an incoherent light source here, not a laser, and we saw that this gives a big enhancement. In terms of trainability — let me not use the word trainability, or all the machine learning people will fight me in the end — let us say in terms of how well we can do the offline optimization, the incoherent light source works much better. But if you have access to the online optimization, the iterative learning control as a feedback mechanism works equally well for the laser and for the incoherent light source. That is what I can say so far. Okay, good, then let me come to my second topic, which is about Hamiltonian learning for effective field theories. So what do I mean by that?
I should give you a little introduction. What we are interested in, in our physics endeavors — not that the first part was not physics — what we usually do in our laboratories is quantum many-body systems. We are interested in systems like these: you have many particles which are close to each other, hopefully interacting with each other, which might have some internal degrees of freedom like a spin here, or which can tunnel, as I already described. Their properties are governed by quantum mechanical effects. We already saw beautiful experiments, for example in Julian's talk, from the whole field that has grown up around studying these systems in a microscopic fashion: a real bottom-up approach, assembling the systems particle by particle, lattice site by lattice site, controlling interactions and controlling positions. We rather take the contrary approach: coming from big systems and understanding more and more by looking at those systems in a somewhat zoomed-out view. We want to look at the large-scale, low-energy effective physics of these systems. And what might happen is that there are new emergent degrees of freedom on these long wavelength scales — here in this example, out of these many little spins, big spins emerge — and we are interested in the physics of the degrees of freedom emerging at low energies and long wavelengths. We call these quantum fields: we seek descriptions of these emergent degrees of freedom by effective field theories, and the effective interaction constants should be captured by
these effective field theories. And now what we wanted to ask is: can we learn the effective Hamiltonian — the effective field theory Hamiltonian describing this physics — from our experimental measurements? For quantum fields there were already investigations in this direction, approaches using the equal-time effective action. Now you might wonder what the equal-time effective action is — never heard of it. It was the same for us, because we were not trained that well in quantum field theory; the quantum field theorists in the audience might know it, but others might not. Still, we managed to measure the equal-time effective action in these field theory systems — this is a paper which was already cited this week. But we were all trained a bit more in the language of Hamiltonians, so we wanted to learn effective Hamiltonians, because that is more intuitive for us: we can interpret what is happening in a Hamiltonian. So we took inspiration from many, many papers — I cannot cite all of them — that do Hamiltonian learning for microscopic theories, and we wanted to translate those ideas to the realm of effective field theories. So what is the workflow? Let me guide you through this admittedly complicated slide; I hope you will get the idea. If you think of Hamiltonian learning, you want to formulate an ansatz Hamiltonian in terms of interaction terms and interaction constants. But the question you have to ask is: what are the entities, the operators, that I put into my Hamiltonian? For spins sitting on a lattice it is quite clear: you put the spin operators on the different lattice sites, and then you can think of, say, a ZZ interaction, an XZ interaction, a field in the X direction.
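For the lattice-spin case just mentioned, writing down such an ansatz is straightforward. A minimal sketch (the chain length and coupling values are purely illustrative):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on_site(op, site, n):
    """Embed a single-site operator at `site` in an n-spin chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == site else I2)
    return out

def ansatz_hamiltonian(n, j_zz, h_x):
    """Ansatz for n spins: a nearest-neighbour ZZ interaction plus a
    field in X. The couplings (j_zz, h_x) are exactly the kind of
    interaction constants a Hamiltonian-learning scheme determines."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H += j_zz * op_on_site(Z, i, n) @ op_on_site(Z, i + 1, n)
    for i in range(n):
        H += h_x * op_on_site(X, i, n)
    return H

H = ansatz_hamiltonian(3, 1.0, 0.5)   # 3 spins -> an 8x8 matrix
```

The point is that on a lattice the operator content is unambiguous; the difficulty discussed next is what replaces these operators for a coarse-grained continuum system.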
Okay, so there it is quite clear in which terms you should formulate your Hamiltonian. However, what we do here is prepare systems of ultracold atoms, which microscopically are atoms sitting at certain positions and interacting with each other, but what we measure is always a coarse-grained picture: we get some coarse-grained field which lives on a certain length scale. This is why you have to think about what the coarse-grained fields are that you put into your effective Hamiltonian. How this works is the following. On the theory side you formulate an ansatz Hamiltonian, which might depend on some masses and some interaction constants. Then you have to coarse-grain this ansatz Hamiltonian: you write down the generating functionals of all your correlation functions and integrate out the high-energy modes up to the scale where you will measure. This leads to a flow of the Hamiltonian in coupling space. The coarse-graining brings you to some scale a on which the new Hamiltonian lives, and you get an effective Hamiltonian describing the physics at the scale a. And then there is the second line that you see flowing in here, the gray one.
This one describes the true system, because the system Hamiltonian that you will learn later and the microscopic Hamiltonian that you write down in your field theory ansatz do not have to coincide. To be very explicit for our example: it is atoms interacting via delta-potential interactions, with tunneling between two wells, and later we will describe it by a sine-Gordon field theory. These only match on a certain scale; microscopically they are definitely not the same. So you have to translate your microscopic ansatz onto the right scale. On the other side, you perform your quantum simulation experiment: you prepare some interesting quantum state — in our case, for example, a thermal state of a certain system Hamiltonian — and then you do coarse-grained measurements and store the results. Here this is the field value φ as a function of the position x, for every pixel that you have, for example, on your camera. You take many snapshots of these systems and store the data on your computer, and from this you can calculate essentially any type of correlation function of these fields that you can imagine. To summarize shortly what we did: we formulated an ansatz Hamiltonian — well, not me, this is the field theorists' work — and they integrated out the high-energy scales. This is something they can do perturbatively, so analytically: no numerical solving of any theory involved, no numerics, nothing. On the other side we performed our quantum simulation and calculated these correlation functions, and all the rest we have to do is classical post-processing.
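The snapshot-and-correlations step can be sketched as follows; the block-averaging and the Gaussian fake data are illustrative stand-ins for the actual coarse-grained measurement at scale a:

```python
import numpy as np

def coarse_grain(phi, block):
    """Block-average field snapshots (shape: n_shots x n_sites) down to
    pixels of `block` sites -- a minimal stand-in for measuring the
    coarse-grained field at scale a."""
    n_shots, n = phi.shape
    n = (n // block) * block                       # drop a ragged tail
    return phi[:, :n].reshape(n_shots, -1, block).mean(axis=2)

def connected_two_point(phi):
    """Connected equal-time two-point function <phi(x)phi(y)>_c
    estimated from the stored snapshots."""
    phi = phi - phi.mean(axis=0)                   # subtract the mean field
    return phi.T @ phi / phi.shape[0]

# Fake snapshots in place of real interference data, for illustration only.
rng = np.random.default_rng(0)
snaps = rng.normal(size=(500, 96))                 # 500 shots of phi(x)
G2 = connected_two_point(coarse_grain(snaps, 4))   # 24x24 correlation matrix
```

Higher-order correlators (including the fourth-order connected ones used later as a non-Gaussianity measure) are built from the same stored snapshots in the same spirit.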
Yeah, so from calculating commutators between the Hamiltonian and certain observables — calculated at the level of the fields, not as expectation values — we can get constraints on these correlation functions, and these constraints allow us to deduce the interaction constants. So you see — I hope you see — there is no numerical or analytical solving of the whole theory involved. All we have to do is formulate an ansatz Hamiltonian and do the measurements on the system. This means we can run this whole procedure also in regimes where we cannot solve our ansatz Hamiltonian. It is not a fit to computer-generated data; it is really deducing the Hamiltonian parameters without being able to solve the theory. It is just that we have constraints that we can deduce from the ansatz Hamiltonian. So, now that the confusion has fully arrived, let us get a bit more explicit. In our case, as I already described, at high momentum scales — at microscopic distances — our system is described by a Bose gas.
So these are bosonic atoms interacting with each other. However — and this was already found in other studies, or you can also show it — the long-wavelength physics can be effectively described by a sine-Gordon field theory. So what we did is formulate a microscopic sine-Gordon field theory and then calculate the effective Hamiltonian on the scale at which we can do the measurement in the experiment. On the other side you perform your experiment — we do interference measurements here, the details do not matter so much at the moment — with a certain scale a, and calculate the correlation functions; and by mapping the correlation functions to the constraints, we can learn this effective sine-Gordon Hamiltonian. That is the workflow we carried out. So, what did we do? The results I will show you today were actually obtained using numerical simulations. I know I said we do not need numerical simulations, but so far we have tested the procedure on numerical simulations of the sine-Gordon field theory in a certain limit, because this is much cleaner than the experiment — even though I am the experimentalist. So what do you do? You want to learn the Hamiltonian, and what we wanted to look at is the Hamiltonian as a function of the measurement scale. You do a numerical simulation, which has some lattice scale a_UV, and then you coarse-grain your results; as a function of the coarse-graining, we do the Hamiltonian learning.
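The fitting step at each coarse-graining scale can be sketched like this. The key structural point is that the commutator constraints are linear in the unknown couplings, so the learning reduces to a weighted least-squares problem with a reduced chi-squared as diagnostic (shapes and names here are illustrative; the actual constraint matrix comes from the coarse-grained ansatz Hamiltonian):

```python
import numpy as np

def learn_couplings(M, b, sigma):
    """Constraints linear in the couplings g: M @ g ~= b, with M and b
    assembled purely from measured correlation functions and sigma the
    statistical errors. Solve by weighted least squares and report the
    reduced chi-squared as the goodness-of-fit diagnostic."""
    w = 1.0 / sigma
    g, *_ = np.linalg.lstsq(M * w[:, None], b * w, rcond=None)
    chi2_red = np.sum(((M @ g - b) / sigma) ** 2) / (len(b) - len(g))
    return g, chi2_red

# Toy check with synthetic constraints: data generated from known
# couplings should be recovered with a vanishing chi-squared.
rng = np.random.default_rng(1)
M = rng.normal(size=(40, 3))
g_true = np.array([1.0, -0.5, 0.2])
g_fit, chi2 = learn_couplings(M, M @ g_true, np.full(40, 0.1))
```

In the real analysis the reduced chi-squared is what tells you whether the ansatz is consistent with the data at all, which is exactly what the plots in the next step show.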
Okay, and these are the results. In the upper plot you see the learned couplings of our effective field theory, normalized to the values that we know — it is numerics, so we know the values and can normalize to our expectation. In the lower graph you see what we call the reduced chi-squared, which tells us how well this variational finding of the optimal parameters worked. Both quantities are plotted as a function of the pixel size, i.e. of the coarse-graining scale. What you see is that starting from a certain scale — on the order of, actually, eight times this a_UV, sorry for the different notations here — suddenly our field theory learning works. Below that scale what you see are lattice effects: you see that this was done numerically on a lattice. But the field theory description works starting from a certain scale — you see this because the reduced chi-squared is equal to one — and you see that the learned couplings coincide with the exact couplings that we put in. This works up to about here: here the chi-squared is still, within the error bar, equal to one, but in the higher part you see that the learned couplings no longer coincide with the exact couplings we put in — the coupling is flowing with length scale, which is reminiscent of a flowing coupling in a renormalization group analysis of the field theory. There are more details on this, and also a bit of analytics, in the paper that I will cite later. But you see we now have a method with which we can deduce something like a flow of interaction constants as a function of the length scale at which we look, and so we can study this
RG-type flow of the interaction constants in our system. And then you see, if you go to even longer scales, at some point it certainly does not work anymore. But if the coarse-graining scale reaches the scale of your physics — if you wash out everything with your coarse-graining — then it is, I think, also sensible that the description stops working. Okay, so far we took an ansatz Hamiltonian of which we knew it should work, we found that it works, and we analyzed the couplings. But it was discussed this morning that sometimes you also want to pose your ansatz in terms of which operators to include, and in some discussions this was coined the "discovery mode" of our method. So what do we do? We write down an ansatz — the details do not matter — in which an interaction potential V appears, and we now test different interaction potentials V and see how well the learning works. So what you see here is this reduced chi-squared again, not as a function this time, but for different interaction potentials V.
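This model-comparison step can be sketched as follows; the fit function and the candidate names are stand-ins, and the chi-squared values below are fabricated for illustration only:

```python
def discovery_mode(fit_with_potential, candidates, tol=0.5):
    """'Discovery mode': rerun the same learning for several candidate
    interaction potentials V and compare the reduced chi-squared; only
    candidates with chi2_red close to 1 are consistent with the data.
    `fit_with_potential` is assumed to return (couplings, chi2_red)."""
    chi2 = {name: fit_with_potential(V)[1] for name, V in candidates.items()}
    consistent = [name for name, c in chi2.items() if abs(c - 1.0) <= tol]
    return chi2, consistent

# Toy usage with made-up fit results (not real data): a cosine potential
# that fits and a quadratic one that does not.
fake_fit = lambda V: (None, {"cos(phi)": 1.05, "phi^2": 9.0}[V])
chi2, ok = discovery_mode(fake_fit, {"cos(phi)": "cos(phi)", "phi^2": "phi^2"})
```

The tolerance on "chi-squared close to one" is a judgment call in practice; in the real analysis the error bar on the reduced chi-squared sets that scale.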
Okay, and the sine-Gordon theory which I used before for the learning is the cosine of φ here. For us, the most relevant comparison is actually φ², because φ² would be a simple non-interacting theory — just a Gaussian theory — and this is something you want to exclude if you want to call your system strongly correlated, for example. You will soon see three types of markers here; they stand for different regimes — it has a bit to do with coupling strength and temperature, which does not matter so much now. What you see is that for the two white markers, for these two parameter sets, the learning also works using only a φ² potential. So that data is consistent with a Hamiltonian which is just quadratic — no interaction, a free Gaussian theory. However, for q = 3.2 this does not work at all: we see that this data is not consistent with a quadratic theory. If we use the cosine — which always includes the φ² part, so you get a good fit here as well — this data can also be represented by the cosine potential.
For the other potentials it depends: you see that the periodicity does not fit very well here, so you get a better fit, but still not a good one. And then we can do this a bit more systematically. Now you see the reduced chi-squared as a function of this coupling ratio q. The blue markers are the fit to the φ² theory, and you see that there is a whole region where the fitting does not work; for the red markers, the cosine theory, it works everywhere. And the region marked here in gray is actually not marked by hand: the gray color shows the magnitude of fourth-order connected correlation functions — a measure for having a non-Gaussian state — and you see that this very nicely coincides with the region where the learning with the φ² potential stops working. This was very satisfying for us to see; we were very happy about it. Okay, good — these were essentially the results I wanted to show you on this part. There is much more in the paper, but I think you have now gotten the right flavor. We also did this with experimental data, and it works; however, we are still a bit limited by resolution, so we are more on the right side of the first plot. It works, but higher resolution would certainly be nice, and higher statistics as well — we are working on that. The other thing is that the numerical data I showed you was all in a regime where a classical statistical description still works, so it was rather effective field theory learning than effective quantum field theory learning.
But we are currently working, on the experimental side, on methods to bring these systems into a regime where quantum fluctuations might actually play a role. The biggest killer, so to say, is temperature: at which temperature can you prepare those systems in a thermal-like state? For this, just a short teaser, a short advertisement, something to think about for the last two minutes. Okay, so how can we go more quantum, or towards quantum states? How can we create lower-temperature states? What we have, as I said, are two tunnel-coupled wires with a tunnel coupling J and on-site interaction. So how do you prepare those systems in a thermal state? The way it was done historically, and still done because it is a very efficient way, is that you prepare a thermal gas in a single well. Now you split this thermal gas into a double-well system, which we can do with a high level of control, and then you cool down by evaporative cooling: you blow away the hottest atoms and you cool down into the Bose-Einstein condensate state. What happens then is that you get a thermal state in this double well where the temperature of the relative degree of freedom is basically governed by the temperature of the density degree of freedom, so of the sum degree of freedom, okay? This is nice, it works very efficiently, and you get very nice thermal states. However, the temperature of this sum degree of freedom, of the density, is quite high, and this is the limitation. So let me present another way of preparing these systems, which could actually lead to new regimes, to effectively colder systems. What you do here is you
prepare the thermal gas again in the single well, then you cool it down in the single well to the Bose-Einstein condensate (BEC) state, and then you do a slow splitting. Okay, if you are, so to say, coming from the business of squeezing and entanglement in atomic condensates, this was basically already the technique used by Markus Oberthaler around 2008 to create spin squeezing in a multi-well potential of atomic BECs. But there it was really single-mode BECs, just in a double well; here we are using this technique for our 1D coupled wires. Basically you can see that the ground state of this system, with repulsive interaction and finite tunnel coupling, will feature some number and spin squeezing, so it can feature entanglement. And it was actually shown that this entanglement, or any amount of spin squeezing, will directly translate into a lower effective temperature of a prethermalized state. These were now many buzzwords, but if you are interested, you can either read this publication, which just appeared in PRX, or just come to me after the talk, and I am happy to tell you some more details about what we actually did in this paper. Yeah. Okay, so with this let me also acknowledge the people involved in this work. As I said, the hard work, in a way, the calculations, the RG stuff and so on, was not done by me but mostly by Robert Ott and Torsten Zache, and this was basically a big collaboration with Hannes Pichler and Peter Zoller, and from the Vienna side with Amin, Sebastian Erne and Jörg Schmiedmayer. The other people on the right side are basically the people doing the work in the laboratory, also from the last project I showed you. And if you are interested in what we plan to do in the future, Venkat is here.
Yeah, he presented a poster yesterday; he is also happy to talk about that. Okay, with this let me summarize. The first learning task I showed you was concerned with how we can create, with feedback control, optimal optical potentials; that is not always so easy. And the second part was concerned with doing Hamiltonian learning for quantum field theories. With this I want to thank you for your attention.

[Audience question, partly inaudible, about conformal field theory] What do you mean with that? Sorry? No, I don't. I mean, I think I roughly get what you mean, but no, we are really looking, you know, for an effective description of our system. We are really after just the effective description. Yeah, so this is actually, I think, concerning Klein-Gordon, for example, effectively what I show you here, basically on that slide, as a function of q. So, okay, I will start a bit slower. If you write down the microscopic Hamiltonian of two Bose gases in a double well (this was basically a paper by Gritsev, Polkovnikov, and Demler around 2008), then you can show, under some approximations, in some second-order perturbation theory, basically neglecting some terms, blah blah blah, that this system maps onto sine-Gordon. But at that point it was not at all clear whether all these approximations hold, and in which regimes, and so all the works that were done before were basically surprising in that respect, that it works over such a large area. And what I show you here is that the true sine-Gordon, so to say, with the cosine potential, is only needed here where you have the gray area. Otherwise it is either a massive theory with just a phi squared, or it's, you know...

[Audience] Could you use these things to find out how to measure fundamental constants? Yeah, yeah. These things, let's talk about these things.
So no, it's very nice. But you mean, like, for QCD or so? No, okay, let me put it like this: if someone has an experiment that can access scales where QCD is the good effective theory. I mean, when I say these are bosons which live at a certain position in space, that is also only true up to a certain scale; at some point QCD might be the right theory. So if someone has an experiment working at the right scale and can measure the relevant correlation functions of the fields appearing in the Hamiltonian, then maybe you can use it. Yeah, but I think they most likely have better methods in the QCD regime.

[Audience] Sorry, this was not very clear: how do you do this coarse-graining over the different scales of the correlations? Okay, so in the experiment, let me start with the experiment, it is very natural. If you do microscopy, the size and the focal length of your lens define some resolution limit, okay, and then you have a camera with which you record, and this has a pixel size. So this very naturally gives you a coarse-graining of what is happening, right? And in the numerics, the numerics is calculated on a certain lattice, and then we binned it, so basically we introduced new pixels, or you smooth it out with a Gaussian kernel that you convolve with, and this gives you the new scale at which you evaluate the correlations.

[Audience question, inaudible] Yeah, super, thanks. This is a very subtle but very good question. Okay, I did not go into too much detail about how we actually get the constraints. So what you do is the following: you take your ansatz Hamiltonian, you promote it to the scale at which you work, and now the question is how you get constraints on your correlation functions.
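Coming back to the coarse-graining for a moment: the numerical procedure just described, smoothing with a Gaussian kernel and then binning into coarser pixels, can be sketched as follows (illustrative only; the kernel width and bin size are free parameters, not values from the talk):

```python
import numpy as np

def coarse_grain(profile, sigma, bin_size):
    """Coarse-grain a 1D field profile, mimicking finite imaging resolution.

    First smooth with a Gaussian kernel of width `sigma` (playing the role of
    the optical point-spread function), then average over bins of `bin_size`
    grid points (playing the role of the camera pixel size).  Both steps set
    the scale at which correlations are compared to the effective theory.
    """
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                          # normalize the kernel
    smoothed = np.convolve(profile, kernel, mode="same")
    n_bins = len(smoothed) // bin_size              # drop an incomplete last bin
    return smoothed[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
```

Running the same analysis at several (`sigma`, `bin_size`) pairs is what gives the family of scales over which the learned effective couplings can be tracked.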
So what you do is you take your ansatz Hamiltonian, and then you calculate the commutator of this thing with some observable, okay? And the expectation value of this, you know, is zero, because, as I said, we are in a thermal state, and this is basically time evolution: in a stationary state the expectation values do not change, so the expectation value of the commutator of H with any observable is zero. So what this commutator of H with some observable gives you is basically a sum of correlation functions, which are weighted by the interaction constants. So by just using one of these observables you will get a sum of correlation functions, and these you have to measure, okay? Now you can ask: what observables should I use? Best you use, for example, observables which are compatible with the symmetries that you write into your Hamiltonian; otherwise these are non-trivial zeros, basically. But how many you should use, I cannot give you a recipe in general. You should use different ones, so that you basically, I think, sample well enough the terms, so to say, that you also have in your Hamiltonian. So I don't think you should use things that only give you second-order correlation functions, because if you have interaction terms in your Hamiltonian, then you should also sample higher ones. But I mean, we write this also explicitly in the paper. There are some guiding principles that you can follow, but I cannot prove to you what you should use; you should basically use different types of correlation functions. And so far, what I showed you here is only using this phi, right?
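To make the structure of these constraints concrete: with a linear ansatz H = sum_b g_b H_b, each observable O_a contributes one homogeneous equation sum_b g_b <[H_b, O_a]> = 0, so the couplings lie in the (approximate) null space of a matrix built from measured correlation functions. The following is my paraphrase of that idea as a least-squares sketch, not the code from the paper:

```python
import numpy as np

def learn_couplings(M):
    """Recover coupling constants from homogeneous constraints M @ g = 0.

    M[a, b] is the measured expectation value <[H_b, O_a]> entering the
    constraint sum_b g_b <[H_b, O_a]> = 0 for each chosen observable O_a.
    The couplings (up to an overall normalization) are the right singular
    vector of M with the smallest singular value; that singular value is
    the residual, i.e. a measure of how consistent the ansatz is with data.
    """
    _, s, vt = np.linalg.svd(M, full_matrices=False)
    g = vt[-1]            # null-space direction = coupling constants
    residual = s[-1]      # small residual => ansatz consistent with the data
    return g, residual
```

In this picture, a phi-squared ansatz failing on sine-Gordon-like data shows up as a large residual: no choice of quadratic couplings makes all the measured constraints vanish at once.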
I told you there is the relative phase, and there is also the relative atom number. In the classical statistical regime that we were in, those two basically factorize, because you have only pi squared and then the rest, so to say. However, if you eventually go into a quantum regime, then they do not factorize anymore, so you will also have to measure correlations of pi, and you will have to measure correlations of pi and phi. Yeah, and there are methods for the covariance matrix of this system already, but, I could not show it because of time, we are also working on methods to measure general correlation functions of pi, and of pi and phi mixed, by using some generalized POVM-type measurements, basically by measuring both in the same experimental realization. I hope this...

[Audience question, inaudible] No, we do some type of heterodyne, I would say, let's put it like this. So you really can measure both quadratures, X and P, in the same experimental realization. We will get some noise in from the heterodyne measurement, so to say, but you can measure them both, and then you can really do all the correlation functions up to any order. Okay.

[Audience] I might ask a follow-up question to this. So now we understood a bit better how all of this works; at first it was a bit obscure. But in case you then want to prepare a state which is not a thermal state, as you assumed, then this post-processing needs to be really modified, right?

Thanks, I mean, thanks for the question.
You are basically asking about all the rest that I did not show. So we stole, basically, one of the techniques that were already there for Hamiltonian learning: this is for a thermal, or basically a steady, state, where you can use that the expectation value of the commutator is zero. But you can also do quenches. So you can prepare a state, do a quench, and what you then use is basically energy conservation. Energy conservation also gives you constraints on your correlation functions, because what you have to do then is basically measure the energy, so measure the Hamiltonian, at every time step, and you know that the right Hamiltonian gives you a conserved energy. And actually there are new subtleties coming in for the field theory, because suddenly, what happens if energy is not conserved because excitations flow into a regime that you cannot measure? Microscopically it is clear that energy is conserved, but here, even though energy is conserved, it could be that your field theory Hamiltonian shows you that energy is not conserved, because excitations are flowing into a regime that you cannot observe. And then you would need new types of descriptions, in terms of open systems or whatever. If someone has a good idea what type of description you need, I am super happy to talk about this; I find this super interesting, but I am an experimentalist just thinking about these things. So yeah, super, thanks for the question. But you can adapt this; we actually have in the paper also some analytics concerning quench data.

[Audience] Thanks a lot. You're welcome, thanks for the question.
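The quench-based variant mentioned at the end can be phrased similarly: for the right couplings, the measured energy E(t) = sum_b g_b <H_b>(t) stays constant after the quench, so minimizing the drift of E(t) over the recorded time steps is again a small eigenvalue problem. A sketch under that framing (my paraphrase, with hypothetical array names, not the paper's implementation):

```python
import numpy as np

def energy_drift(H_terms, couplings):
    """Energy-conservation check for a candidate Hamiltonian after a quench.

    H_terms[t, b] is the measured expectation value of the b-th ansatz term
    at time step t (each a combination of correlation functions).  For the
    right couplings g, E(t) = sum_b g_b * H_terms[t, b] is constant, so the
    standard deviation of E(t) over time quantifies the violation.
    """
    E = H_terms @ np.asarray(couplings)
    return np.std(E)

def fit_couplings_from_quench(H_terms):
    """Unit-norm couplings minimizing the energy drift over the quench.

    Minimizing std(H_terms @ g) at fixed |g| = 1 is an eigenvalue problem
    for the time-covariance matrix of the measured terms: take the
    eigenvector belonging to the smallest eigenvalue.
    """
    C = np.cov(H_terms, rowvar=False)   # covariance over time steps
    w, v = np.linalg.eigh(C)            # eigenvalues in ascending order
    return v[:, 0]                      # smallest-eigenvalue direction
```

The caveat from the answer above still applies: if excitations leave the observable window, no coupling vector makes the drift small, and that residual is itself a diagnostic that the closed-system field theory description is breaking down.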