So the next talk is by Frederik Møller, and he's going to tell us about applications of GHD in cold gas experiments. Right, yeah. Hello everyone. I would like to thank the organizers for putting all of this together and giving me the opportunity to talk here. A lot of you might not know me, because I'm pretty new in this whole hydrodynamic scene. But, as the slide says, my name is Frederik, and I'm a PhD student in the Atomchip group of Jörg Schmiedmayer. Over these past days, we've seen a lot of the interesting things that you can do with generalized hydrodynamics, and what really interests me is how we can take this and apply it to experimental settings. So I would like to start by just giving you a quick reminder of how we actually realize these 1D Bose gases in an experimental setting. We have our cold cloud of bosons, which we typically achieve via laser cooling and evaporative cooling, and we then load these atoms into a very tight transverse confinement, a very tight trap. What we obtain is something that is effectively 1D, as you can see in my small illustration here. The Hamiltonian governing this is the one down here. If the level spacing of our transverse confinement is sufficiently large, meaning it is bigger than all the other energy scales of the gas, such as the chemical potential and the thermal energy scale, then our gas is effectively one-dimensional. We can then ignore the transverse potential, and we essentially end up with the Lieb-Liniger Hamiltonian. Now, as we all know, the Lieb-Liniger Hamiltonian is integrable, which means that we have some very interesting dynamics. This is most famously demonstrated in the quantum Newton's cradle. Here we have a 1D Bose gas, which is in a longitudinal confinement as well, and we impart two large opposite momenta on this cloud via a Bragg pulse sequence.
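As a quick illustration of the effective-1D criterion mentioned above (the transverse level spacing must exceed both the chemical potential and the thermal energy), here is a minimal numerical check. The trap frequency, temperature, and chemical potential below are purely illustrative placeholders, not the parameters of the experiment:

```python
import math

# 1D criterion: hbar*omega_perp must exceed both mu and k_B*T.
# All numbers below are hypothetical, chosen only for illustration.
hbar = 1.054571817e-34   # J*s
kB = 1.380649e-23        # J/K

omega_perp = 2 * math.pi * 30e3          # assumed transverse trap frequency, 30 kHz
level_spacing = hbar * omega_perp        # transverse level spacing in joules

mu = 0.2 * level_spacing                 # assumed chemical potential
thermal = kB * 50e-9                     # assumed temperature of 50 nK

is_1d = (level_spacing > mu) and (level_spacing > thermal)
print(is_1d)
```

With these placeholder values the gas would comfortably satisfy the 1D condition; in practice one checks the same two inequalities with the measured trap frequency, density, and temperature.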
This causes the cloud to split apart into two smaller clouds, which then climb up the sides of the longitudinal confinement, turn back, and collide again. Owing to integrability, these collisions can persist for hundreds of periods if experimental conditions are good. However, we have to remind ourselves that real systems are only approximately integrable; there are plenty of sources of integrability breaking. We have heating and noise: if your trap is not completely stable, say it's an optical lattice and your beam is jittering a little bit, this can create excitations and heating. We have atom losses, which we just saw in the talk before, and which are also a source of integrability breaking. And then there is the coupling to the transverse excited states, which we account for via a Boltzmann-type collision integral. So we have the standard GHD propagation equation here on the left-hand side, accounting for the 1D integrable dynamics, and on the right-hand side we now have the transverse dynamics with these different collision integrals, where each collision integral corresponds to one of the processes of excitation and de-excitation. Just to flash the collision integrals: they look long and cumbersome, but they're actually rather easy to evaluate. What you will notice is that basically all of the quantities in here are already readily available from GHD. The only thing we have to include from outside is one number, the length scale of our transverse trapping. As long as we know that, we can evaluate this using only quantities available from GHD. You'll also notice the fermionic nature of the quasi-particles of the Lieb-Liniger model manifesting in this equation, since we have both quasi-particle densities and hole densities. We also have this factor of one half here. It's a heuristic factor we introduced: since we have these two excitation channels and we're only really using one of them, we have added this factor of a half.
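To make the structure of such a Boltzmann-type source term concrete, here is a toy sketch. This is emphatically not the paper's actual collision integral: the rate `P_exc`, the grid, and the pairing of out-states are placeholders. It only illustrates the structural features mentioned above, namely the gain/loss form, the fermionic blocking factors (1 − n) that forbid scattering into occupied states, and the heuristic factor of one half:

```python
import numpy as np

def collision_step(n, P_exc=0.01, dt=1e-3):
    """One Euler step of dn/dt = I[n] for a filling function n(theta).

    Toy model: each pair of rapidity-grid points (i, j) exchanges occupation
    with a constant rate, suppressed by the hole density (1 - n) of the
    target state. The 0.5 mimics the heuristic half mentioned in the talk.
    """
    N = len(n)
    I = np.zeros(N)
    for i in range(N):
        for j in range(N):
            gain = n[j] * (1.0 - n[i])   # scatter into state i, blocked if i is full
            loss = n[i] * (1.0 - n[j])   # scatter out of state i, blocked if j is full
            I[i] += 0.5 * P_exc * (gain - loss)
    return n + dt * I

n0 = np.linspace(0.9, 0.1, 8)   # some filling function between 0 and 1
n1 = collision_step(n0)
print(n1)
```

Note that the gain/loss structure makes this toy term conserve the total filling exactly and keeps n between 0 and 1 for small time steps; the real collision integrals instead transfer weight between the transverse ground and excited components while conserving the total atom number.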
Looking at numerical simulations, it didn't make sense having a one there, and after we added the half, everything made much more sense. So what happens when we apply this and try to compare with the experiment? These are our results here. There's a lot going on, so I will go through it row by row. What I plot in the first row is the quasi-particle density, evolved using just the standard GHD equation. As you will see, the Bragg peaks are clearly visible initially, and they persist throughout the entire evolution, as we would expect from integrability. Meanwhile, our extended model is here in the middle row. In the first roughly 10 to 12 periods of the cradle, the evolution looks very similar to what you're seeing for the standard GHD. But over time, you can see the second component manifesting, and the differences become very apparent, because through the second excited state we can now take atoms from the Bragg peaks with large opposite momenta and redistribute them to different points in the phase space. At the very end here, you can still resolve some features of the initial Bragg peaks, but on top of that you have this big thermal background showing the onset of thermalization. Now we can compare this to the experiment, although it's not completely straightforward. As was so nicely explained in the previous talk, if you have a box of bosons in 1D and release the box, then the rapidities are the asymptotic velocities. The problem for our cradle is that the longitudinal confinement actually comes from the beams creating the transverse confinement, meaning that to release the atoms, we have to release everything. So these are not expanding in 1D anymore; they're expanding in three dimensions.
And when you're expanding in three dimensions, the asymptotic distribution is no longer the rapidity distribution, but the bosonic momentum distribution function. Now, luckily, we're sort of saved in the sense that when we're deep in the degenerate regime, we can use methods to estimate the momentum distribution from the rapidity distribution, and we do that to help fix the initial state of our simulation. As we go towards the non-degenerate regime, the two distributions, the rapidity distribution and the momentum distribution, actually become increasingly similar. In our gas we also see this in the cradle: owing to dephasing, the density drops as atoms get distributed all over the phase space, and this drives us towards the non-degenerate regime. So if you look down here, in the final row of the figure, we are directly comparing the momentum distribution to the rapidity distribution. We see large differences initially, but as time goes on, we end up with an experimentally measured profile very clearly resembling the simulation result for our extended model. To quantify this a bit further, we can also look at the variance of the profiles over one period. This has the advantage of being somewhat insensitive to the whole momentum-distribution-versus-rapidity-distribution discussion, though we nevertheless see a large difference at time equals zero between the experimental curve and the simulated curve. However, after roughly 20 periods of the cradle, we start to observe really good agreement between our extended model and the experiment. The standard GHD follows the same sort of trajectory as our extended model initially, but then they depart, since for standard GHD the variance doesn't change, simply because we keep the Bragg peaks throughout the evolution. Another very interesting thing is what you can see down in the inset here.
We plot the excitation probability, and what you will notice is that it's rather small; it only goes up to around 3%. These simulations and the experiment were carried out for 80 atoms in a tube. So 3% of 80 atoms, that is about two atoms. Two atoms in total in the transverse excited state are able to make such a massive difference in the dynamics over time. This really goes to show that if you want to use GHD at intermediate to long time scales, you really need to take into account these effects of integrability breaking. Now, quickly, I also want to plug this numerical framework that I made, called iFluid. All of these simulations were carried out using this framework, and it's essentially made as a platform on which you can build your GHD application. It takes care of all the nasty work with indices and so on and implements all the basic GHD functions, and then you can do whatever extension you want on top of that. As an example, I have a screenshot here; this is the function I use to calculate the collision integral. As you can see, it's just a couple of lines; it's really simple to realize, and behind the scenes the framework is taking care of all the computations for you. So it's not just meant for experimentalists; if you want to code some application quickly, it can also be worthwhile as a theorist, even though I know theorists love coding their own numerics. So I think I will finish up now. I thank my collaborators for the great work they put in. We put a paper on the arXiv yesterday showing this progress, and it's been very interesting the last couple of weeks. There's been tremendous attention towards this whole topic of integrability breaking, and it's great to see that we're finally pushing in a direction where we can take these great methods and test them directly in the lab. So yeah, thank you for your time. So we're back on time.
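As an aside on the variance diagnostic discussed a moment ago: a minimal sketch of that kind of analysis, computing the variance of each profile and averaging it over one cradle period, might look like the following. The profiles here are purely illustrative Gaussians, not the experimental data, and the oscillating center is just a stand-in for the cradle motion:

```python
import numpy as np

def profile_variance(x, f):
    """Variance of position under the normalized profile f(x) on a uniform grid."""
    dx = x[1] - x[0]
    p = f / (f.sum() * dx)                    # normalize to a probability density
    mean = (x * p).sum() * dx                 # first moment
    return (((x - mean) ** 2) * p).sum() * dx # second central moment

# Illustrative profiles: Gaussians whose center oscillates over one period
x = np.linspace(-1.0, 1.0, 401)
phases = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
profiles = [np.exp(-(x - 0.3 * np.cos(ph)) ** 2 / 0.02) for ph in phases]

# Average the per-profile variance over the sampled period
period_avg = np.mean([profile_variance(x, f) for f in profiles])
print(period_avg)
```

Because the variance is taken of each snapshot and then averaged over the period, the result tracks the width of the distribution rather than its oscillating center, which is what makes it comparatively insensitive to whether one feeds in momentum or rapidity profiles.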
We have a lot of time for questions. Can I ask a quick question? Yes, sure. So, you wrote down some collision integrals; in general, such a collision integral would involve complicated form factors and that kind of thing. So how did you obtain, or just write down, these collision integrals?

I think you would get a much more satisfactory answer from my collaborator, who actually wrote up the collision integral. But most of it is clear if you look at the components: you have the incoming momentum difference of the atoms, which are these rapidities, theta and theta prime. And if you look at this component here, this is just an excitation probability which depends on the momentum. So these are the incoming momenta and these are the outgoing momenta. You basically just have two grids: your standard rapidity grid and then a collision grid. And then you have the densities, where you have this fermionic nature of the Lieb-Liniger model manifesting, in the sense that you cannot scatter into a state that is already occupied.

So basically you're writing collision integrals for the fermions? Basically, yes. Which is not so obvious. I mean, as we saw in the talk of Jérôme, if you think about losses, it's very different in terms of fermions or bosons. Exactly. But here we also have the assumption that we have very, very few atoms in the excited state, which means that we're not running into the issue of having to count which states we can scatter into there; we just scatter into the excited state. We only have to worry about it when we come back into the ground state, where we have the majority of the atoms.

Okay. May I ask a question? Yes, please. So, I'm not sure I understood, at the end, how you compare theory with experiments.
You compare the rapidity distribution to the measured momentum distribution? Exactly. Because you also mentioned that, for degenerate gases, there is a way to compute the momentum distribution from the rapidity distribution. What are you thinking of?

So we have developed a scheme to estimate it using, among other things, Tan's contact; it's one of my collaborators who has been working on that. We used it mainly to fix the initial state, because once the whole evolution starts and the peaks start mixing, it gets a lot harder. But it's a combination of fitting thermal states and then this Tan's contact.

Okay. But what is for sure is that in the end you compare the rapidity distribution to the momentum distribution? Yeah, exactly. And we're not fully non-degenerate in the end, so it's more of a qualitative comparison than a direct comparison. But still, we see good agreement, and although we're not quite non-degenerate after 60 periods, we're close to it, so this comparison does make sense. And within the cradle, the atoms are evolving according to the rapidities, so the extended model still gives you a view and insight into what is happening inside the cradle. Of course, it would be very advantageous if we could compare a measured rapidity distribution directly to our simulation. Unfortunately, this data is relatively old, and the experiment was constructed before this was a concern. If we could go back now and change the experiment, we would build it in such a way that we would measure the rapidity distribution directly, instead of having to rely on this comparison between two different quantities.

Yes. So, here you said that your gas is actually trapped in an optical lattice? Yes, it is. So it's not the atom chip setup?
No, this is not the atom chip experiment. This is a red-detuned optical lattice: you have these two counter-propagating beams meeting, and this gives you an array of 1D tubes. The harmonic confinement, which is actually slightly anharmonic here, comes simply from the intensity profile of the beam, which is Gaussian. That is also why, since this one beam creates the entire potential, both the transverse and the longitudinal potential, we end up in this situation upon measurement: when we have to release the longitudinal potential, the longitudinal potential is made of the transverse potential, in a sense, so you cannot release one or the other. That's why we expand in 3D when we do the measurement, and why we end up in this situation with the momentum distribution.

May I ask a question? Yes, please. This is Sasha Abanov, hi. So the question is the following. Your model of losing particles into this transverse mode is pretty detailed; you use some particular model for the collision integrals. Have you tried to use something in the spirit of the previous talk: say that there is some small rate of losing particles into the transverse modes and some small rate of coming back, and just see how the distribution evolves?

No, we have not tried that, but it would be very interesting. One thing to note, though, is that you of course have to construct it in a way where you can simulate it numerically, because if you look at our time scales, these are pretty long simulations; we're simulating up to 60 periods of the cradle.

Sorry, what I wanted to say is that you basically could have used something like Jérôme's solution, given in the previous talk, to simulate the GHD. I mean, it would be very, very interesting to try and see what would come out of that. But he mentioned himself that the calculation was very cumbersome, right?
Whereas this calculation that I'm showing here, this is like three hours on my laptop. So it's also about making it useful, in the sense that comparing to the experiment is tricky: fixing the initial state can be really difficult in GHD, especially when you just have a measurement of a momentum distribution. We don't even know how it looks in real space. So trying to fix the initial state, trying to get the right temperature and all of that, requires multiple runs just to narrow down where our parameters are: are we in a regime where we're extra sensitive to perturbations of the parameters, or in a stable regime where we are confident in our simulations? And if you had to run multi-day simulations every single time for every parameter, it would be a very, very long project indeed. So it's also about the applicability of the model.

Yes, if I can comment on this: I do not expect our results to apply here, because I do not expect this to be simulable with a Lindblad equation, with a Markovian process. Here, when you consider a collision of two initial atoms, there is only one possible final state if you have to conserve momentum, once you put one atom in the excited state, and so I would not describe this as a Markovian process.

I mean, we are also observing relatively low atom losses in the experiment. On this time scale, I think it's up to 5% atom loss; but, as I showed, we only have 3% in the excited state and it makes a big difference. So I don't know if a 5% atom loss would also have additional influence.
You can see, for example, here in the very bottom-right figure, that our tails are a bit more pronounced. Whether this is due to the whole momentum-distribution-versus-rapidity-distribution issue, I don't know, but in general we also observe that we thermalize a little bit quicker compared to the experiment, and this could be due to processes that we are not modeling here, like losses, or small contributions from the first excited state. There are so many different things going on in this experiment, and we have just taken the thing that we believe is the prime driver of thermalization and explored that.

So, I just checked: the atom loss was a few percent over a period of 5 seconds, and we are looking here at a period of half a second, so it's much, much less than a percent. Okay, so less than a percent even. Yeah, and the heating was something like a tenth of ℏω over the 5 seconds, or even much less than that, so in that regime it's basically completely negligible.

It should be said that a lot of this is owing to the red-detuned potential. It's sort of a cost-benefit trade-off: if you use a red-detuned lattice, you can have these very, very low heating rates and very low losses, but it comes at the cost that you're measuring the momentum distribution function rather than the rapidity distribution function. So it's also a question of practicality. All right.