Sorry, I just forgot to display. OK, so last time we picked up with these pictures where you see the appearance, in the absorption images, of a Bose-Einstein condensate out of a thermal background. So we've discussed how we can trap, manipulate, and image these condensates. And in the beginning of this lecture, I'd like to give a few properties of these gases. And in order to understand these properties, it is fundamental to account for atomic interactions. So in general, when you consider the true interaction potential between two atoms, which have in general a complex internal structure, you will get a shape which typically looks like this, with a long-range tail, which reflects the fact that at long range you get the van der Waals interaction between the electronic clouds. At short range, it becomes much more complicated, and you get all kinds of electrostatic interactions between the electrons that form the electronic shells of these atoms. And typically, these potentials at very short range can be very strong, with depths that can be up to several thousand Kelvin. So it's significantly higher than the energies we're discussing for cold atoms. But this happens at a very short range, typically an atomic scale. So b here, the range, is a few angstroms only. And these wells are deep, so they can support many, many bound states, which correspond to the diatomic molecules that you can form with two like atoms that collide. So if we had to take into account all of this potential, the bound states and so on, it would be very daunting to attempt the description of a many-body system interacting with this potential. So fortunately, when you work at very low temperatures, the collisional energy is also very low. And then the theory of scattering in quantum mechanics tells you that the scattering amplitude, which captures all the properties of the collision, boils down to a single number a, which is called the scattering length.
And basically, all the details of the collisions are buried in this single number. And so if you are interested in things that happen on scales much bigger than a, where basically the details of the potential are washed out, then all you need to know is the value of this scattering length. And then you can predict what happens for many atoms interacting together. So that leads to this strategy, which is called the pseudopotential method, which is that in order to describe a gas of many particles interacting with one particular potential, instead of choosing the true form of the potential, you use a fictitious one. And you tune the strength of that potential so that the fictitious potential, the pseudopotential, gives you the same scattering length as the real one, where normally the scattering length is extracted from experiments. So the most common choice for a two-body potential, which depends on the positions r1 and r2 of the two scattering particles, is just to take a contact potential, or Fermi potential sometimes in the literature, which is just a delta function. So exactly when the two particles coincide in space, they interact, and otherwise not. And the strength g here of that Fermi potential is related to the scattering length by this expression: g is equal to 4 pi h bar squared a divided by the mass. And so when you use this, you can show that you reproduce the same scattering length, and then the scattering cross section for identical bosons, 8 pi a squared, is also the same, provided you treat that potential in the Born approximation, the lowest order in perturbation theory. If you treat this potential differently, you have to be careful. This type of delta function interaction is known in quantum field theory to give divergences, and so you have to do a renormalization program to avoid these divergences, which is the usual situation you encounter in quantum field theory.
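To put some numbers on the contact-potential expression above, here is a minimal sketch evaluating g = 4 pi h-bar^2 a / m and the identical-boson cross section 8 pi a^2. The atomic species and scattering length (rubidium-87 with a of about 100 Bohr radii) are illustrative assumptions, not values quoted in the lecture.

```python
import math

hbar = 1.054571817e-34  # J*s
a0 = 5.29177210903e-11  # Bohr radius, m

# Assumed example values: 87Rb, s-wave scattering length ~100 a0
m = 87 * 1.66053906660e-27  # atomic mass, kg
a = 100 * a0                # scattering length, m

g = 4 * math.pi * hbar**2 * a / m   # contact (Fermi) coupling strength, J*m^3
sigma = 8 * math.pi * a**2          # cross section for identical bosons, m^2

print(f"g = {g:.3e} J m^3")
print(f"sigma = {sigma:.3e} m^2")
```

The tiny value of sigma compared to atomic-scale areas reflects how dilute-gas collisions are fully characterized by a alone.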
But for what we'll be discussing, this simple form will be sufficient, and no renormalization will be necessary. OK, so I'm going to take this approach here. And now I'm going to try to give a brief account of what you do when you want to describe the properties of a gas of N bosons. So they're in a trap. So here you have the trap potential, which is, for example, the optical trap discussed last time, and they interact via this pairwise interaction potential, which I will replace with the contact form of strength g. So in general, this is a problem that we don't really know how to solve. There are some ways to calculate the ground state, like quantum Monte Carlo and things like this, but in general there is no exact solution. So many types of approximations have been devised, some simpler than others. I'm going to discuss the simplest one here, which belongs to what one might call the conventional phases, let's say, where basically you just take your many-body wave function, psi of r1, et cetera, rN, to be a product state of one and the same orbital phi, evaluated at r1, et cetera, rN. But this product state is not necessarily the trap ground state. So this would be exactly the T = 0 ground state for a system of ideal particles, but here you assume that the structure of the many-body wave function stays the same. So it's a product, and you neglect all correlations. But now, interactions will come about in determining this phi here in a self-consistent manner. And so in detail, the way it comes in is that you write down a variational energy for this psi here, where phi is the variational parameter. So this gives you an energy functional where you recognize this bit that corresponds to the kinetic energy stored in this wave function, so it depends on the spatial variation of phi. The potential energy simply depends on the modulus of phi squared.
And now, the interaction energy here, which basically picks up the value of phi at the same position, gives this term, which scales as phi to the fourth power. So it's not acting like a potential, but it's really acting like an interaction, where four wave-function factors are involved. OK. And so this is the average value of the Hamiltonian for that particular variational wave function. And in the variational method, if you want to find the best approximation of the ground state you can get with this particular ansatz, you just minimize with respect to phi star. And you want to deal with normalized wave functions, so you also apply the additional constraint that the norm of phi should be 1. So when you do all this, this minimization gives you the so-called Gross-Pitaevskii equation, which looks like a Schrodinger equation. So it's an eigenvalue equation for phi, where you get the kinetic energy operator here and the potential operator here, like in the Schrodinger equation. But there is an additional nonlinear term, proportional to g and to the modulus squared of phi, which represents the effect of interactions in a mean-field way. And now the eigenvalue here is not the energy of the state. It's basically the chemical potential of the gas; it can be interpreted as the chemical potential of the gas. And what people normally do is to introduce a wave function psi, which is just related to phi by multiplying by the square root of N, where N is the total number of atoms. And this is what people call the condensate wave function. And now if you remember that in quantum mechanics you can interpret the modulus squared of the wave function as the density of a probability fluid, so to speak, then the modulus of psi squared will simply give you the total density of your gas. So here is an example of how the condensate wave function changes as you increase the interaction strength.
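The self-consistent minimization described above can be sketched numerically. Below is a minimal imaginary-time split-step solver for a 1D Gross-Pitaevskii equation in harmonic-oscillator units (h-bar = m = omega = 1); the effective 1D coupling g1d, the grid, and the step count are illustrative assumptions, not parameters from the lecture.

```python
import numpy as np

# Imaginary-time relaxation for the 1D GP equation
#   mu * psi = [-(1/2) d2/dx2 + x^2/2 + g1d |psi|^2] psi
# Replacing t -> -i*tau and renormalizing every step damps out excited
# components and converges to the (mean-field) ground state.

def gp_ground_state(g1d, n_grid=256, L=20.0, dtau=1e-3, n_steps=20000):
    x = np.linspace(-L / 2, L / 2, n_grid, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)
    V = 0.5 * x**2
    psi = np.exp(-x**2 / 2).astype(complex)        # start from ideal-gas ground state
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    kin = np.exp(-0.5 * k**2 * dtau)               # kinetic step in Fourier space
    for _ in range(n_steps):
        psi *= np.exp(-(V + g1d * np.abs(psi)**2) * dtau)
        psi = np.fft.ifft(kin * np.fft.fft(psi))
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # enforce unit norm
    return x, psi

x, psi0 = gp_ground_state(g1d=0.0)    # ideal gas: Gaussian
x, psi1 = gp_ground_state(g1d=50.0)   # strong repulsion: broad, flat-topped profile

dx = x[1] - x[0]
w0 = np.sqrt(np.sum(x**2 * np.abs(psi0)**2) * dx)
w1 = np.sqrt(np.sum(x**2 * np.abs(psi1)**2) * dx)
print(f"rms width: ideal {w0:.3f}, interacting {w1:.3f}")
```

Repulsive interactions broaden the cloud well beyond the ideal-gas width, which is exactly the trend shown in the lecture's plot of |psi|^2 versus chi.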
So I'm defining here a normalized interaction strength chi, which is proportional to the scattering length and to the total number of atoms. And I'm plotting, as a function of position, the modulus squared of psi for different values of the interactions. So chi equals 0 is the non-interacting gas. There you basically condense into the ground state of the harmonic potential, and so your wave function is identical to the ideal gas wave function, basically the Gaussian harmonic oscillator ground state, which is shown in blue. As you increase the interactions, so you increase this chi parameter, what you see is that very soon the wave function deviates from that of the non-interacting system. And as chi increases and becomes very large, the non-interacting wave function is very, very different from the actual condensate wave function, which takes this parabolic shape here, the so-called Thomas-Fermi limit shown in red. And so this shows that for this type of strong interactions, let's say, the actual shape of the system is determined basically almost entirely by the interactions. The trap potential only enters to determine this parabolic profile, but mostly interactions play the important role. So this Thomas-Fermi limit is an important one, because most experiments are performed in that limit. There is an exercise in the problem sheet where this is studied in more detail, so I don't want to emphasize this too much here. I just want to show an experimental result, an early one actually, from 1998, showing typical numbers for a BEC of sodium with approximately a million atoms. And you see that the experimental data here are indeed very, very different from the harmonic oscillator ground state, which is shown as a dashed line, and are well reproduced by this parabolic shape corresponding to the regime of strong interactions. So maybe at this point, I also want to make a comment on numbers.
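For the Thomas-Fermi limit just mentioned, the standard closed-form result for an isotropic 3D harmonic trap gives the chemical potential and cloud radius directly. The sketch below evaluates it for numbers comparable to the sodium experiment cited above; the trap frequency and scattering length are assumed illustrative values, not ones given in the lecture.

```python
import math

hbar = 1.054571817e-34
h = 6.62607015e-34
u = 1.66053906660e-27

# Assumed example: 23Na BEC, N ~ 1e6 atoms, a ~ 2.8 nm,
# isotropic trap with omega = 2*pi*100 Hz
m = 23 * u
N = 1e6
a = 2.8e-9
omega = 2 * math.pi * 100.0

aho = math.sqrt(hbar / (m * omega))                   # harmonic-oscillator length
# Standard Thomas-Fermi results for an isotropic 3D trap:
mu = 0.5 * hbar * omega * (15 * N * a / aho) ** 0.4   # chemical potential
R = aho * (15 * N * a / aho) ** 0.2                   # Thomas-Fermi radius

print(f"a_ho = {aho*1e6:.2f} um, mu/h = {mu/h/1000:.2f} kHz, R_TF = {R*1e6:.1f} um")
```

The Thomas-Fermi radius comes out an order of magnitude larger than the oscillator length, consistent with the broad parabolic profiles in the data.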
So in the talk of Gerhard Kirchmeyer, we had some energy scales for Josephson junctions, which were in the 20 gigahertz range, so around 1 Kelvin. Here, I want to emphasize with this small footnote that our energy scales are now orders of magnitude below that. Basically, the typical chemical potential, which gives you the interaction energy of the cloud, will be in the range of 1 kilohertz approximately, which corresponds to about 50 nanokelvin. And so what this means is that the dynamics of the system, the typical time scales, will be much, much slower than in typical solid state experiments with Josephson junctions and the like. And this is only bearable because we are also much better isolated. Again, we work with systems which are held in vacuum and completely isolated from any material walls, with very few impurities and so on. But I think it's interesting to notice this difference of scales and of energy between these two systems. So I won't say too much about the properties of BECs, but I will say a word about one of the most important of them, which is their superfluid character. And by superfluid, typically what people mean is that when you set the system into motion, it will undergo a dissipationless flow, without viscosity. And that interpretation actually follows from the interpretation of this psi as a wave function. So again, it's the probabilistic interpretation from Schrodinger's equation: you know that the gradient of the phase of the wave function is related to the current of the probability fluid. So it's the same here. So if I call theta the phase of this condensate wave function, then one can associate a velocity, which I call vs for superfluid here, given by h bar over m times the gradient of theta. And this turns out to be the velocity characterizing the flow of a condensate, or more generally, of a superfluid.
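The comparison of scales above is just the conversion E = h*f = kB*T. A quick check of both numbers quoted in the lecture:

```python
# Converting between the frequency and temperature energy scales used above:
# E = h * f = k_B * T, so T = h * f / k_B
h = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23    # Boltzmann constant, J/K

f = 1e3  # 1 kHz, typical BEC chemical potential
T_nK = h * f / kB * 1e9
print(f"1 kHz corresponds to {T_nK:.0f} nK")

f_JJ = 20e9  # ~20 GHz, the Josephson-junction scale mentioned above
T_K = h * f_JJ / kB
print(f"20 GHz corresponds to {T_K:.2f} K")
```

So the two platforms are separated by roughly seven orders of magnitude in energy, and correspondingly in time scale.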
It's also very similar to the case of superconductors, where theta is now the phase of the superconducting order parameter, and where a similar expression holds. Now, a classic experiment to emphasize the different behavior of superfluids is the rotating bucket experiment, which is sketched here. So you consider a container, cylindrical for example, and you rotate it about the z-axis at some angular velocity Omega. If you have a classical fluid that fills this container, then it will basically be dragged along by friction with the walls, and it will start rotating with the container. So in the end, this ends up as a solid-body-like rotation. The velocity field will be given by the rotation rate cross the position, which gives you a uniform vorticity, the curl of v, equal to twice the rotation rate. That's what you expect from a classical fluid in this type of experiment. But now, if you have a superfluid inside this bucket, then this expression for vs puts constraints on the type of flow that can occur inside the system. The fact that the velocity must be the gradient of a function necessarily leads to irrotational flow, since the curl of a gradient is always 0. And so it's impossible to fulfill the solid-body condition. So if you stop at this level, what you conclude is that while a classical fluid will be dragged along with the rotating cylinder, a superfluid will in fact stay at rest and will basically not feel the walls at all, due to the absence of friction with them. But that's not the end of the story. That is actually what happens for a slowly rotating container, but there is a way to escape that conclusion. And the way is that this argument is only valid for smooth functions. It is not valid if you have a singularity in the phase theta, and therefore in the superfluid velocity field. And this singularity, which can occur, is a point where the phase is ill-defined. And for a complex field, this will occur at a zero of the modulus of the field.
So at a point where the density vanishes. And in classical hydrodynamics, this type of fluid configuration is known: it corresponds to a vortex line. So here is an example, a topical example, of a hurricane. A pure vortex line would correspond to an azimuthal velocity field circulating around the line. And it's characterized by the circulation of this velocity field, which you take along a closed contour around the vortex line, and which takes some value C. And in that case, the vorticity associated with the velocity field will be given by C times a two-dimensional delta function, oriented along z. So that's a way to escape the zero-curl condition that's imposed by the form of the superfluid velocity. But now there is an additional condition, which echoes what Gerhard told us about Josephson junctions, which is that the circulation around the vortex line must be quantized for a quantum fluid, for a fluid described by a macroscopic wave function. And the argument goes pretty much along the same lines as for flux quantization. You basically look at the circulation of the velocity along a closed contour around the vortex line. Since vs is proportional to the gradient of theta, you can map this integral onto the difference of the phase between one point and the same point after a complete turn. And then the wave function, in order to have a meaningful interpretation, must be single-valued. And so there is no way but to have this phase difference be 2 pi times an integer, in order to keep the wave function single-valued after you make a round trip. And so the conclusion from this is that, just like the flux for superconductors, the circulation of the superfluid velocity will be quantized in units of the circulation quantum h over m, times some integer s, which can be positive or negative, and which characterizes the so-called vortex charge. And these quantized vortices have been realized in experiments.
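The quantization argument above can be checked numerically: for a phase winding theta = s * phi around a vortex, the integral of vs = (h-bar/m) grad(theta) around a closed loop comes out as s times h/m, independent of the loop radius. A minimal sketch, with 87Rb as an assumed example species:

```python
import numpy as np

hbar = 1.054571817e-34
m = 87 * 1.66053906660e-27   # assumed: 87Rb

# For a vortex of charge s, the condensate phase winds as theta = s * phi.
# Integrate v_s . dl = (hbar/m) grad(theta) . dl around a closed loop and
# compare with the circulation quantum h/m.
s = 2
phi = np.linspace(0, 2 * np.pi, 10001)
r = 1e-6                                                # loop radius; drops out
theta = s * phi
v_azimuthal = (hbar / m) * np.gradient(theta, phi) / r  # v = (hbar/m) * s / r

# Riemann sum of v . dl = v_azimuthal * r * dphi around the loop
circulation = np.sum(v_azimuthal[:-1] * r * np.diff(phi))

kappa = 2 * np.pi * hbar / m   # circulation quantum h/m
print(f"circulation / (h/m) = {circulation / kappa:.4f}")
```

The ratio is exactly the integer charge s, and the loop radius cancels, as it must for a pure phase winding.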
And they provide really very strong evidence that the system is behaving like a superfluid described by a macroscopic wave function. So briefly, this is a sketch of the experiment. The idea is to use again an optical trap, which is off-center with respect to the cloud. And then you basically drag the trap along the edge of the cloud, so that due to the dipole force it produces on the atoms, they will be set into rotation by the rotating laser. So the trap is physically rotated using deflectors. And what people observed is that when you try to set a BEC in motion like this, you first observe very, very violent non-equilibrium dynamics. The system basically undergoes violent heating because of the amount of energy you pump into it, and it is typically driven far from equilibrium. And then there is some turbulent relaxation, which can take up to one second, down to an equilibrium state with the system having acquired some angular momentum because of the rotation. And when it finally settles to equilibrium, typically above some rotation frequency you will see the appearance of a hole in the density pattern after time of flight, which is the signature of the density hole associated with a vortex. And when you stir the system harder and harder, the single vortex here is accompanied by many more, which tend to organize into a regular pattern, like you can see here. So these examples are taken for very large vortex numbers, on the order of 100 vortices typically. And you see that they tend to arrange into a triangular lattice, which is well known from an analogous situation in type II superconductors driven above some critical field. So the superconductor will tend to expel flux, just as the superfluid tends to resist rotation. But this only works up to a certain critical field, just like here rotation kicks in with the first vortex at a critical rotation speed. And above that, you get many vortices.
And they always tend to arrange, in their equilibrium configuration, into a so-called Abrikosov lattice, which is a triangular array. So there is a very close phenomenology between the two situations of superfluids and type II superconductors in this respect. OK. So are there questions about this first part? OK. If none, I'm going to move to the second part of this lecture, and I'm going to switch topic a little bit. So now we assume that we can produce and study Bose-Einstein condensates. And what I want to discuss is what happens to these condensates when you load them into a particular potential, which is depicted here, which is a periodic potential. So the way to produce them is relatively simple. You basically use laser light, and you have several beams coming in onto the atoms from different directions. Now, if these beams are mutually coherent, they can interfere, and this interference will produce a periodic pattern that can trap the atoms. And this periodic pattern will basically play the same role as the ionic crystal that traps electrons in a solid, so there are very close analogies between the two. The simplest configuration would be a standing wave here. So you have two counter-propagating plane waves, and they will just make a sinusoidal modulation, which will trap the atoms at the nodes or anti-nodes of the standing wave. And you can generalize this to 2D by just applying two counter-propagating lasers along two orthogonal directions, or to 3D by using three pairs of lasers. So in the first case, you would have a 1D potential trapping the atoms at given positions, in planes so to speak. In the 2D case, you will trap the atoms in a 2D square array. And in the 3D case, you will realize a full 3D cubic array that provides an overall confinement. And many more geometries are possible, playing with the shape and the orientation of the lasers.
But I will concentrate on this type of potential for this lecture. So I'm going to actually address that afterwards. But yes, the idea is that the atoms can move from plane to plane and really feel the full periodicity of the potential. You can also work in a regime where all these wells are just isolated traps, completely independent from each other. But what's more interesting, from the point of view of many-body systems, is really when all these wells can communicate with one another. And I will come to that. Before I do, I will just list three reasons why it's interesting to try to understand what's happening in these periodic potentials. The first one is that there is a clear connection with analogous solid state systems, so electrons in a solid that feel the periodic potential from the ionic matrix. And so you can study band structure and a variety of related phenomena. And this has interesting consequences. One consequence is that you can use the peculiarities of a quantum particle in a periodic potential as a tool to manipulate the external degrees of freedom of the atoms. And that can be used for realizing optics for atomic matter waves and interferometry, and I will show an example where this can be used for high precision measurements. The second reason is that optical lattices can also give you a path to go away from the conventional phases I was discussing before, where everything was essentially a product wave function and where you could, to a very good approximation, neglect correlations between the atoms. In an optical lattice, that's no longer true. You can reach regimes where you have strong correlations between the particles, and you can realize strongly correlated gases and new and interesting phases of matter.
And of course, the third reason is that it's interesting to possibly use these systems for emulating other, possibly complicated, many-body systems. In this talk, I will concentrate on the first two items. And so this slide is just to make precise what I said in the introductory slide: when you have mutually coherent plane waves, which we'll use to approximate our lasers, then basically the total intensity pattern, which determines the trap potential felt by the atoms, will be a coherent sum of all the different interference terms here. And what you can see from this expression is that if kn denotes the wave vectors of the interfering waves, then the total intensity will be modulated at all the possible differences between the wave vectors you have sent in. The simplest example, of course, is a standing wave in 1D, where you have one laser with wave vector kL and another one with wave vector minus kL. And so that gives you a potential with a 2kL x modulation, meaning that the period of this potential, d, is lambda L over 2. Typically, lambda L, the wavelength of the laser, is in the micron range, so you face a periodic potential with a typical period of several hundred nanometers, or half a micron, let's say. So much, much bigger than a typical solid state lattice. And the depth here, V0, can be controlled by the laser power. So typically, you have a fixed geometry, and you can control whether the potential is deep or shallow. All right. So once you have one standing wave, it's easy to generalize to 3D by applying a standing wave along each direction, x, y, z. But then this potential is separable, so at least when you are dealing with single-particle properties, it's sufficient to analyze the 1D case, and then it's easy to deduce what's happening in 3D just by multiplication.
If you have more complicated lattice potentials, which can also be produced, you will have to generalize this slightly. But for this talk, I will stick to the cubic lattice case, and for now, to the 1D situation with only a standing wave. So it's interesting to look at the natural units that emerge from the problem. The lattice spacing, as I said, is half the wavelength, and can also be expressed as pi over the wave vector of one of the lasers that creates the lattice. And to this wave vector, you can associate a momentum, the so-called recoil momentum, which is h bar times kL, the momentum of one of the photons that forms this wave. And to that recoil momentum, you associate a recoil energy, which is the kinetic energy of an atom of mass m with this momentum. So here, it's h bar squared kL squared over 2m. And it's useful to use those units when discussing problems in an optical lattice: so lengths in terms of the spacing, momenta in terms of h bar kL, and ER as the unit of energy. And the typical values are, as I said, a few hundred nanometers for the lattice spacing, meaning that this ER here will be on the order of 1 to 2 kilohertz, typically, for common atomic species. OK. So the first thing I want to recall now is basically the basic notions of band structure theory used to describe the quantum motion in such a periodic potential. So I'm taking the complete kinetic plus potential Hamiltonian here, and I'm reminding you here of basically the content of Bloch's theorem, which gives you a general form for the eigenstates of this Hamiltonian. The basic step is to notice that if you define an operator Td, which just translates the position by one lattice spacing d, so it's the exponential of i times the momentum operator times d over h bar, then this Hamiltonian here is invariant: p is obviously invariant, it commutes with Td, and the potential is periodic, so it is unchanged if you translate it by one spacing. OK.
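The natural units just introduced are quick to evaluate. A minimal sketch, assuming 87Rb atoms and a 1064 nm lattice laser as example values:

```python
import math

hbar = 1.054571817e-34
h = 6.62607015e-34
u = 1.66053906660e-27

# Assumed example: 87Rb in a lattice made with 1064 nm light
m = 87 * u
lam = 1.064e-6
kL = 2 * math.pi / lam

d = math.pi / kL                   # lattice spacing = lambda / 2
ER = (hbar * kL) ** 2 / (2 * m)    # recoil energy = (hbar kL)^2 / 2m

print(f"d = {d*1e9:.0f} nm, E_R/h = {ER/h/1000:.2f} kHz")
```

This reproduces the scales quoted in the lecture: a spacing of a few hundred nanometers and a recoil energy of a couple of kilohertz.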
So what that means is that Td and H commute with each other, and so you can find simultaneous eigenstates of both operators. This is basically the content of Bloch's theorem. You can write the simultaneous eigenstates of the Hamiltonian and of the translation operator, the so-called Bloch waves, in the following form: phi n,q is given by a plane wave, the exponential of iqx, times a function u n,q which is periodic with the same period as the potential. OK. This one I will call the Bloch function. So you notice two indices to label those wave functions. n here is the band index; it will label, basically, the energies. And q here is a quasi-momentum, not to be confused with the true momentum. q basically corresponds to the eigenvalue of Td: Td acting on phi n,q will give you the exponential of iqd times phi n,q. OK. And this distinction is important in the sense that when you consider how many states you need, the quasi-momentum here, which is an index that comes from the symmetry of the problem and not from any energetics, will stay the same if you shift the quasi-momentum q by a multiple of 2kL, the reciprocal lattice vector of our 1D lattice. OK. So if you take phi n at q plus 2kL and apply Td to it, you will find another wave function, but with the same eigenvalue as phi n,q. OK. So what that means is that if you are not cautious, if you let q run without restriction, you are going to count states many, many times. OK. So it's possible to do that with caution, but the most common approach is to avoid double counting altogether and restrict q to the first Brillouin zone, basically the primitive cell of the reciprocal lattice, where you are sure that you will count each state once and only once. In the experiment, of course, we do not have periodic boundary conditions.
And we do the same as usual, which is to assume that we have a big enough system that boundary conditions don't matter. OK. That's more or less the idea. So you don't have to impose them in the theory, of course. But when you want to do any type of calculation, at least analytically, it's much easier to deal with periodic boundary conditions, that is, plane waves instead of sines and cosines and so on. It's just the usual convenience. And it's only justified because your system size is supposed to be big enough that surface effects are not so important. In the experiment, we have a cubic volume of roughly 100 sites in each direction. So you have 10 to the 5 atoms, let's say, with 1 or 2 atoms per site, so the typical linear size is 100 sites. But then, of course, it's not uniform either; I will come to that a bit later. There is also a harmonic trap and so on. So here I'm speaking of a uniform system for simplicity, but it's really only an idealization of what exists in the experiments; it's a first step, let's say. OK, so yes, to avoid double counting, this quasi-momentum index will be restricted to a range of values where each state is counted once and only once. And this corresponds to the first Brillouin zone, which, in my units, will be given by the interval from minus kL to kL. kL, again, is the laser wave vector, so the size of the Brillouin zone is 2kL in the natural units I've introduced. OK. Now, about the other index, n, which determines basically the energy structure, the band structure, you cannot say anything without solving the Schrodinger equation explicitly. And there, it's useful to take the periodicity into consideration, meaning that there exists a Fourier series to decompose the potential. And since the potential here is a simple sinusoid, you get only the first harmonic.
But the same applies to the Bloch function. So you can expand the Bloch function on this basis of plane waves with momenta qm, where qm differs from q by an integer multiple of 2kL, the fundamental momentum. So to summarize, the Bloch functions are superpositions of all harmonics of the fundamental momentum 2kL, and the lattice potential will couple a given momentum p to p plus or minus 2kL. This is what builds up the particular shape of the Bloch wave. And it also makes it easy to solve this equation numerically, because in this basis, for this potential, the Hamiltonian is basically a tridiagonal matrix, so it's very easy. And so this is the result for a few values of the lattice depth. Here, I have no lattice. So this is a trivial, but still valid, instance of a periodic potential. And in that case, you expect that the dispersion relation, energy versus momentum, will be parabolic, p squared over 2m. So momentum and quasi-momentum are different objects. Momentum is unrestricted, but quasi-momentum is restricted to the first Brillouin zone. And the mapping between the two in this reduced band scheme is that momentum k is related to quasi-momentum by k equals q plus 2nkL, where n counts the number of arches of the parabola you have to shift to go back into the first Brillouin zone. So it basically amounts to folding the free-particle parabola into the first Brillouin zone, and that gives you these bands, so to speak. So this would be the lowest band, then the second band here, the third band. And of course, they all touch when you don't have a lattice potential. So you notice that the edges of the Brillouin zone are where the folding occurs, and so naturally the bands cross there if you don't have the lattice. But as soon as you put in a small lattice potential, the degeneracy here is lifted by its presence, and you get a gap opening near the edges, with a value that increases with the depth of the lattice potential.
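The tridiagonal structure mentioned above makes the band calculation a few lines of code. Here is a minimal sketch for V(x) = V0 sin^2(kL x), working in recoil units (energies in E_R, quasi-momentum q in units of kL); the specific depths chosen are illustrative.

```python
import numpy as np

# Band structure of V(x) = V0 sin^2(kL x) in the plane-wave basis described
# above. Since sin^2 contains a single harmonic, the lattice couples the
# component q + 2m*kL only to q + 2(m +/- 1)*kL, with matrix element -V0/4
# (plus a constant offset V0/2 on the diagonal).

def bands(q, V0, n_max=10):
    ms = np.arange(-n_max, n_max + 1)
    H = np.diag((q + 2 * ms) ** 2 + V0 / 2.0)   # kinetic + constant offset, in E_R
    off = -V0 / 4.0 * np.ones(len(ms) - 1)
    H += np.diag(off, 1) + np.diag(off, -1)     # tridiagonal lattice coupling
    return np.sort(np.linalg.eigvalsh(H))

qs = np.linspace(-1, 1, 101)                    # first Brillouin zone, q in units of kL
results = {}
for V0 in (0.0, 4.0, 10.0):
    E = np.array([bands(q, V0)[:2] for q in qs])      # two lowest bands
    width = E[:, 0].max() - E[:, 0].min()             # lowest-band width
    gap = E[:, 1].min() - E[:, 0].max()               # gap at the zone edge
    results[V0] = (width, gap)
    print(f"V0 = {V0:4.1f} E_R: band width = {width:.3f} E_R, gap = {gap:.3f} E_R")
```

As in the plots discussed here: with no lattice the bands touch (zero gap), and as V0 grows the gap opens while the lowest band flattens.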
And as you increase the depth more, so here, in recoil energies, it's 4, you notice that the bands get flatter and flatter. This is for a depth of 10, where the lowest band is basically almost flat on this scale. So the gaps widen, and the bands become more and more flat as the lattice depth is increased. And that corresponds to an increased localization of the particles inside the lattice wells, as we are going to see in a moment. OK. So here I'm showing basically the simplest experiment you could imagine doing, which is: you start with a BEC, and at t equals 0, you just flash on the lattice potential. You switch it on abruptly, leave it on for some time, then switch it off and look at the momentum distribution afterwards. And what you see, as the time during which the lattice potential is on increases, is momentum peaks that appear, regularly spaced by multiples of 2kL. That's a manifestation of this particular structure of the Bloch states. And this is a coherent process. So as time increases, you see that it oscillates, and at some point atoms disappear completely from the k equals 0 momentum peak, but then they start to come back. So there are coherent oscillations between all the momentum states that are coupled together by the presence of the lattice potential. And this plot here shows basically the population of each of these momentum peaks as a function of time. And you see that this oscillation is very well reproduced by the full band theory, which I won't go into in detail. So that already tells you that it's really necessary to take into account the quantum nature of the motion in this potential to understand the behavior that is experimentally measured. Are there questions on this part? Or do I go on? So I'm not going to spend too much time on this, but I want to say a word. So far, we have discussed basically the behavior of a single particle in a periodic potential.
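The pulsed-lattice experiment just described can be simulated in the same plane-wave basis as the band calculation: a condensate at rest starts in the p = 0 component, and the lattice coherently couples it to the orders at +/- 2 h-bar kL, +/- 4 h-bar kL, and so on. A minimal sketch in recoil units (time in h-bar/E_R); the depth V0 and the times shown are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Coherent diffraction of a BEC by a suddenly switched-on lattice,
# V(x) = V0 sin^2(kL x), evolved in the q = 0 plane-wave basis.
n_max = 8
ms = np.arange(-n_max, n_max + 1)
V0 = 10.0                                           # assumed depth, in E_R
H = np.diag((2.0 * ms) ** 2 + V0 / 2.0)             # kinetic energies of orders 2m*kL
H += np.diag(-V0 / 4.0 * np.ones(len(ms) - 1), 1)   # lattice coupling, tridiagonal
H += np.diag(-V0 / 4.0 * np.ones(len(ms) - 1), -1)

psi0 = np.zeros(len(ms), dtype=complex)
psi0[n_max] = 1.0                                   # all atoms start in p = 0

pops_t = {}
for t in (0.0, 0.1, 0.2, 0.4):                      # times in hbar/E_R
    psi = expm(-1j * H * t) @ psi0
    pops_t[t] = np.abs(psi) ** 2
    print(f"t = {t:.1f}: P(0) = {pops_t[t][n_max]:.3f}, "
          f"P(+/-2 hbar kL) = {pops_t[t][n_max + 1]:.3f}")
```

The p = 0 population oscillates as atoms are coherently transferred out to the diffraction orders and back, which is the behavior seen in the time-of-flight peak populations.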
Now I want to show what happens when you put a Bose-condensed quantum gas — N atoms all in the same single-particle state — into an optical lattice. So this slide points out an intrinsic difficulty in using the usual method for obtaining Bose-Einstein condensates. This is a sketch summarizing what we do. We have some trap with a finite depth U0, and we use the method of evaporative cooling. You rely on the fact that atoms collide, and that most of the time these collisions leave the atoms trapped. But sometimes, for two colliding atoms, one of them can be promoted to an energy higher than the finite trap depth, with the other one demoted to a lower energy. The result is that the hot atom escapes from the trap, while the other one goes down to a lower energy. Now, if the gas has enough time to rethermalize, the point is that the atom that escaped carried away more energy than the average, simply because the trap depth is typically large compared to the temperature. So the rethermalization of the N − 1 remaining atoms results in a lower mean energy per atom; in other words, it cools the system. It's the same mechanism as blowing the steam off the top of your coffee cup: you remove the hottest particles, and when the rest rethermalize, they have less energy on average. And this is the principle actually used to produce a BEC: you lower U0 in time to adapt to the gas that is continuously cooling. No, not really, because these hot atoms just escape, and at some point they hit the walls of the vacuum chamber — and they don't bounce off, they just stick there. Well, of course that can happen, but it's minor; there is only a small probability for this compared to just escaping and sticking somewhere. OK.
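The energy-truncation argument above can be illustrated with a toy model. This is a deliberately crude sketch, not a simulation of a real evaporation ramp: it draws Boltzmann-distributed energies (mean kT = 1 in arbitrary units), removes every atom above an assumed trap depth, and checks that the mean energy per remaining atom drops.

```python
import random

random.seed(0)  # reproducible toy numbers

def evaporation_step(energies, depth):
    """Remove every atom whose energy exceeds the trap depth.

    The escapees carry more than the average energy, so the mean
    energy per remaining atom -- hence the temperature after
    rethermalization -- goes down."""
    return [e for e in energies if e <= depth]

# Boltzmann-distributed energies at temperature T = 1 (arbitrary units)
atoms = [random.expovariate(1.0) for _ in range(100_000)]
before = sum(atoms) / len(atoms)
kept = evaporation_step(atoms, depth=2.0)
after = sum(kept) / len(kept)
print(f"mean energy per atom: {before:.3f} -> {after:.3f}")
```

A real evaporation ramp then lowers the depth continuously and lets the gas rethermalize between truncations, but the one-step version already shows the sign of the effect.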
So this method works fine in an optical dipole trap, like we saw last time. It stops working when we transfer the gas to a periodic potential, and the reason is that the band structure prevents it from working. Imagine that you prepare atoms in the lowest band, which is ultimately what you want to do. Then there is this very big energy gap, which means that two atoms that collide in this band — apart from a very high-order process — are not able to climb into the upper band, or into the continuum here where they could escape the trap. So once they are prepared in this lowest band, they are basically stuck there, and the basic mechanism behind evaporative cooling no longer works. What we do instead is use an adiabatic strategy. We start in a regular trap, where evaporative cooling works, and we prepare a sample as cold as we can, with as little entropy as possible. And then we transfer the gas adiabatically — which means as slowly as we can — from the optical dipole trap to the lattice potential, essentially by increasing the lattice potential from zero to the desired value, and eventually removing the external trap as well. In the notes I provide a few more details about how to do this adiabatically, and about the limits on the time scales for this ramp of the lattice potential. I will skip that for lack of time, and I just show a result where you do things right and transfer the system slowly. You end up with a condensate in the lowest state of the lattice potential: in the lowest band, and in the quasi-momentum state q = 0. Previously we saw that a good way to analyze the properties of these gases is to apply a time-of-flight sequence. This is what is shown here — time progresses from top to bottom. And what you see when you release the condensate is a very striking interference pattern.
So each one of these is one image, and you see a very striking interference pattern, with peaks that are reminiscent of the Bragg peaks you see in X-ray diffraction, for example. There are two ways to interpret this image. One of them is an analogy with optics: you can imagine each well of the lattice to be a source of atomic matter waves, which it emits and which propagate in space. But since the sources are regularly spaced, the waves interfere as they propagate. They interfere constructively in some specific directions, which correspond to these very sharp peaks — accumulations of atoms in those directions. At other locations, for example here, the interference is still there but destructive, and so you see a zero at that particular location. That's basically the atomic dual of ordinary optical diffraction: there, you send a coherent light wave onto a material grating — a regular periodic structure — and you observe diffraction with regular spots in the observation plane. Here it's exactly the dual: the diffracting structure is made of light, and the objects that diffract are material objects, the atoms. And it's only possible because, since we start from a condensate, there is a well-defined phase relationship between all these atoms, so that what you observe in the end is the mutually coherent interference of all the atoms taken together. All right, so the second interpretation follows the band-structure description we had before, which I recall here, for the state at the bottom of the band structure — band index zero, quasi-momentum zero. We saw that you could expand this state, with some envelope function, as a sum of plane waves with momenta qm, which are integer multiples of the fundamental spacing 2kL.
Or equivalently, which are momentum vectors of the reciprocal lattice. Now, if you have a condensate, it means that you have N atoms occupying that state. So the condensate wave function, as I've defined it before, is the square root of N times this Bloch wave — I'm neglecting interactions here. And the momentum distribution is simply N times the modulus squared of the lowest Bloch wave. If you take the Fourier transform of that wave function to get the momentum-space wave function, what you get is some envelope function times delta peaks located exactly at the reciprocal-lattice vectors. And if you generalize this to 3D, you get a prediction for this type of pattern, where basically one peak appears for each vector of the reciprocal lattice — each multiple of 2kL in the x and y directions, for this particularly simple cubic lattice. These Bragg peaks are basically the hallmark that you've done things right: you started from a condensate, you managed to transfer it adiabatically, and so you kept the coherence of the whole system over its full extent. OK. Are there questions about that part? All right. So I will now switch to a slightly different topic, where I will show that you can exploit this particular nature of the band structure to do interesting things, such as observing Bloch oscillations and using them for applications. I will come back to the topic of BECs in the next lecture, and then we will see that when interactions become more important, let's say, you can change from a BEC to something else entirely and end up with strongly correlated states of matter. For now, I will stick with a weakly interacting, or even non-interacting, description to discuss this topic. So the situation I'm going to describe is that of an accelerated lattice.
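The weights of the time-of-flight peaks follow directly from the plane-wave expansion: the peak at momentum 2mħkL carries a fraction |c_m|² of the atoms, where c_m are the coefficients of the q = 0 lowest Bloch state. A minimal sketch, using the same tridiagonal Hamiltonian and recoil-unit conventions as before (V(x) = V0 sin²(kL·x), energies in E_R):

```python
import numpy as np

def momentum_peak_weights(V0, n_max=10):
    """Plane-wave weights |c_m|^2 of the q = 0 lowest Bloch state.

    Returns {2*m: |c_m|^2}: the fraction of atoms expected in the
    time-of-flight peak at momentum 2*m*hbar*kL (V0 in recoil units).
    """
    m = np.arange(-n_max, n_max + 1)
    H = np.diag((2 * m) ** 2 + V0 / 2.0)
    off = -V0 / 4.0 * np.ones(2 * n_max)
    H += np.diag(off, 1) + np.diag(off, -1)
    _, vecs = np.linalg.eigh(H)
    c = vecs[:, 0]  # lowest-band eigenvector, normalized
    return {int(2 * mi): abs(ci) ** 2 for mi, ci in zip(m, c)}

w = momentum_peak_weights(8.0)
print(f"p=0 peak: {w[0]:.3f}, p=+/-2 hbar kL peaks: {w[2]:.3f} each")
```

The central peak dominates, the side peaks are symmetric, and making the lattice deeper transfers more weight into the higher harmonics — consistent with the increasing localization of the atoms in the wells.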
So I imagine that the potential here is not stationary like we had so far, but is centered at some position — there is some overall translation x0, which depends on time in general. You should imagine a sliding lattice here; x0 could also be sinusoidal, so it could be vibrating — everything is possible. Now, this problem here — sorry, it is translationally invariant, but it is not time-independent, because of the fact that the lattice is moving, so we cannot directly apply the band-structure formalism we have discussed. Using a unitary transformation, you can map this time-dependent problem onto a time-independent one, where you have basically the kinetic term plus the lattice potential, and an additional term which is just a tilt, proportional to the acceleration of the lattice, that adds to the periodic part. So this problem is not periodic but is time-independent, and that problem is periodic but not time-independent — we have to choose. It's usually conceptually simpler to work in the moving frame, where you have a time-independent Hamiltonian. OK. And there, the transformation that maps the lab frame, where the potential is moving, onto the moving frame is this one. You can see that this factor is a translation by x0 — you just follow the potential as it moves. And the term here proportional to the position operator x times m·ẋ0 is a momentum boost: exp(i × something × x) shifts the momentum by that something, all right? And this last factor is just a pure phase. Now, what's remarkable about this problem is that Bloch's theorem can still be applied in the lab frame. In general the Bloch functions will be time-dependent and so on, but the symmetry arguments still apply.
So that means you can always label the eigenstates of this Hamiltonian by a quasi-momentum q, and that q will not change, because of the periodicity of the potential. So in the lab frame, you can label the eigenstates by some band index n and some quasi-momentum q0. But now, when you move into the moving frame, you see that this transformation changes a state with quasi-momentum q0: because of this term, it is changed to q0 − m·ẋ0/ħ, with ẋ0 the velocity of the sliding lattice, if you want. So now take x0 to be a uniformly accelerated motion, x0 = a·t²/2 — just standard uniform acceleration. What you find is that q(t) evolves as q0 + F·t/ħ, where F, the slope here, can be thought of as an applied force — or an inertial force, if you prefer. So what's happening is that the quasi-momentum now increases linearly in time and is scanned across the Brillouin zone. But we saw that quasi-momentum is bounded, so something has to happen when it reaches the edge of the Brillouin zone — say, when q equals +kL, if you start inside the Brillouin zone and wait for some time. What happens at this quasi-momentum? Either you make a non-adiabatic transfer and move to higher bands, which I will not discuss, or, if the process occurs slowly enough, you stay in the lowest band; but then, since you have to stay within the first Brillouin zone, you undergo Bragg reflection, which means the lattice provides you with −2kL in that case, and the wave packet is transported almost immediately to −kL. And so what this means is that as time progresses, you go through the Brillouin zone again and again — so this is an experimental image.
The initial wave packet is here. Time progresses upwards, so the center of mass of the wave packet increases linearly. Exactly when you arrive at the edge of the Brillouin zone, you see part of the wave packet getting transmitted to the other side; then all of it is transmitted, and everything goes on like this indefinitely. So you get oscillations of the wave packet in quasi-momentum space, with a period given basically by the time it takes to go from one edge of the Brillouin zone to the other. This Bloch period is 2ħkL divided by the applied force. So this was observed some time ago with non-degenerate gases — that's the case for this experiment. You can see here that the average velocity of the wave packet oscillates as a function of time, and also as a function of the force. But you see that this is the size of the Brillouin zone, basically, and the wave packet is relatively broad. Of course, this type of thing is much easier to follow if you start with a very narrow wave packet, prepared with a very well-defined quasi-momentum — and with a BEC, that became very easy to do. So I show you examples here where again the quasi-momentum is scanned, doing these Bloch oscillations across the Brillouin zone. You see the initial wave packet here, and it's monitored as time progresses: it bounces at each edge, continues through the Brillouin zone, bounces again, et cetera. And the experimentalists in Innsbruck were able to follow this motion — so this is the mean momentum, if you want — for more than 10,000 periods; this is 20,000 and something, which is pretty impressive.
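The Bloch period τ_B = 2ħkL/F is a one-line calculation. Here is a minimal sketch with illustrative numbers — a 1064 nm lattice wavelength and gravity on rubidium-87 as the force are my own example choices, not the parameters of the experiments shown:

```python
import math

HBAR = 1.054571817e-34    # J s
M_RB87 = 1.44316060e-25   # kg, rubidium-87
G = 9.81                  # m/s^2, illustrative force: gravity

def bloch_period(lattice_wavelength, force):
    """tau_B = 2*hbar*kL / F: the time to scan the Brillouin zone once
    (the zone spans 2*kL in quasi-momentum, with kL = 2*pi/lambda)."""
    k_l = 2 * math.pi / lattice_wavelength
    return 2 * HBAR * k_l / force

tau = bloch_period(1064e-9, M_RB87 * G)
print(f"Bloch period under gravity: {tau * 1e3:.2f} ms")
```

With these numbers the period comes out below a millisecond, which makes clear how remarkable following tens of thousands of periods is.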
And even though you can see that the shape of the BEC is strongly distorted after 20,000 cycles, still the oscillations persist and go on and on, basically as long as they can keep atoms in their trap. That's really quite impressive. It was possible in this particular experiment because they were able to tune the interaction to zero prior to doing the experiment. If you have interactions, then this motion is damped very quickly, as we are going to see in the next lecture. But in that particular case, they were working with a non-interacting BEC, and then essentially this coherent oscillation goes on forever. Okay? So just a remark on what's happening to the wave packet inside the lattice. So I take this accelerated lattice, and I consider a wave packet with some mean quasi-momentum and some group velocity vg, which is given by the derivative of the dispersion relation with respect to q, all right? Now, in the lattice frame, if I approximate the band structure by a parabolic dispersion relation with an effective mass m*, then vg is simply given by minus m/m* times the velocity of the lattice — this ẋ0 I introduced earlier. Going back to the lab frame, this means that the velocity of the wave packet I'm preparing — of the BEC — will be given by the velocity of the lattice plus the group velocity, which gives this expression: (1 − m/m*) times the velocity at which the lattice moves. So now it's interesting to remark that when the lattice is shallow, the effective mass is essentially the same as the free mass: the dispersion relation is very close, at least near the bottom of the band structure, to the free-particle parabola.
And so what this means is that the lattice is sliding through and the atoms more or less stand still; they are almost unaffected by the lattice being dragged through the cloud. On the other hand, if you take a deep lattice, the bands become very flat, meaning the band curvature is small — and so the effective mass is large. Through this expression, you see that the BEC velocity then becomes almost equal to the velocity of the sliding lattice. So in other words, in the deep-lattice limit, the atomic wave packet is dragged along by the moving lattice, basically like a conveyor belt, okay? And this possibility to accelerate a wave packet like this, in a coherent way, can be used for a lot of measurements. I will give you an example now where it's applied to measuring the ratio ħ/m. All right, the motivation for doing this is that if you consider the fine-structure constant α of QED, quantum electrodynamics, you can write it this way, as a product of well-measured quantities — the Rydberg constant here, the ratio of the electron mass to the rubidium mass in that case — times ħ/m, which is the least well-known factor in this product, okay? And so people are very interested in trying to measure α and other QED-related quantities in the most accurate way possible, simply because QED is, as far as I am aware, the most precisely tested physical theory so far. Catching a possible deviation from QED could point to new physics beyond QED — maybe physics that can be understood from QCD, interactions with hadrons, muons, et cetera, or maybe a more exotic theory, for example one postulating an internal structure of the electron, or anything else, okay?
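The shallow-versus-deep crossover in the drag formula v = (1 − m/m*)·ẋ0 can be checked numerically: m/m* at the band bottom is half the band curvature in recoil units (since the free parabola E = q² has E″ = 2). A minimal sketch, again assuming V(x) = V0 sin²(kL·x) and the same tridiagonal plane-wave Hamiltonian, with the curvature taken by finite differences:

```python
import numpy as np

def lowest_band_energy(V0, q, n_max=10):
    """Lowest-band energy at quasi-momentum q (recoil units, q in kL)."""
    m = np.arange(-n_max, n_max + 1)
    H = np.diag((q + 2 * m) ** 2 + V0 / 2.0)
    off = -V0 / 4.0 * np.ones(2 * n_max)
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

def mass_ratio(V0, dq=1e-3):
    """m/m* at q = 0 from the band curvature: m/m* = E''(0)/2."""
    curv = (lowest_band_energy(V0, dq) - 2 * lowest_band_energy(V0, 0.0)
            + lowest_band_energy(V0, -dq)) / dq**2
    return curv / 2.0

for V0 in (0.0, 4.0, 12.0):
    r = mass_ratio(V0)
    print(f"V0 = {V0:4.1f} E_R: m/m* = {r:.3f}, dragged fraction = {1 - r:.3f}")
```

At V0 = 0 the dragged fraction vanishes (atoms stand still as the lattice slides by); as the lattice deepens, m/m* drops and the atoms follow the lattice like a conveyor belt.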
So basically, in metrology there is a lot of interest in trying to measure the fine-structure constant as accurately as possible, and the accuracy, as you will see, is pretty amazing. One way to do this was pursued in our lab in Paris, in the group of François Biraben, by measuring this ħ/m and then relying on other values — for R∞, or for this ratio of masses — which are very well known on their own, okay? So the factor limiting the accuracy is this one. The way it's done is using so-called Raman spectroscopy, which is a process by which you can measure the velocity of an atom. You start with an atom with two ground states, which I call g1 and g2 here, and you apply two lasers with different wave vectors, k1 and k2, and different frequencies, okay? And you tune these frequencies in such a way that neither of the two lasers is resonant on its own — so you don't promote atoms to the excited state — but the two-photon process, where you absorb a photon from beam one and re-emit into beam two in a stimulated way, is resonant, okay? So when this process happens, you transfer atoms from state g1 to g2, but you also impart a momentum, to ensure momentum conservation, given by ħ(k1 − k2): you absorb k1 and re-emit k2. So when you enforce that this resonance condition — energy conservation — is satisfied, you end up with a condition on the frequency difference of the two lasers, given by this equation, okay? And this shows that if you measure this resonance frequency for some momentum pi, and again for pi changed by ħδk — so if you measure those two values, and you know all the other quantities accurately — taking the difference gives you an experimental way to measure ħ/m, okay?
And all these quantities are basically related to optical wavelengths, which can in turn be related to optical frequencies, which are known extremely accurately. So it's a very accurate way to get a value for this ratio ħ/m. Now, the important thing is to be able to prepare a wave packet whose pi is well known, and such that δk is well known and also as large as possible, okay? And this is done using, again, Bloch oscillations. This shows snapshots as a function of time, where you see the atoms being accelerated as they are dragged along by a strong lattice, okay? The speed is basically constant, except when the quasi-momentum approaches the edge of the Brillouin zone — and at that point, in the lab frame now, not the moving frame, you see a very steep change where the atoms pick up a momentum of 2ħkL, okay? If you do this process N times — N such steps — you end up with a momentum difference of 2NħkL, and N can be large. I think in the experiment they initially used N around 50, and now they can do N up to 1,000 — basically a thousand of these momentum steps imparted to the atoms, okay? So what this means is that you have one wave packet which is basically at rest, and another one which has been given a huge momentum kick of 2NħkL, okay? And applying this method — this contribution grows quadratically with N and that one linearly — you can increase the precision with which ħ/m can be determined, okay? And this is what they've done: with this method, they achieved for α — here it's 1/α minus 137.03…, and the scale here is 10⁻⁵ — a relative precision of 10⁻⁹, okay?
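The size of the momentum kick is easy to quantify. A minimal sketch with illustrative numbers of my own choosing (a 532 nm lattice and N = 500 Bloch oscillations, not the actual experimental parameters): the velocity change after N oscillations is Δv = 2NħkL/m, which is what the Raman resonance shift measures.

```python
import math

HBAR = 1.054571817e-34    # J s
M_RB87 = 1.44316060e-25   # kg, rubidium-87

def velocity_change(n_osc, lattice_wavelength):
    """Delta v = 2*N*hbar*kL/m: velocity picked up after N Bloch
    oscillations, i.e. N Bragg reflections of 2*hbar*kL each."""
    k_l = 2 * math.pi / lattice_wavelength
    return 2 * n_osc * HBAR * k_l / M_RB87

dv = velocity_change(500, 532e-9)
print(f"velocity change after 500 oscillations: {dv:.2f} m/s")
```

A single two-photon recoil is a few millimeters per second, so a thousand coherent kicks turn it into a change of several meters per second — which is why large N improves the determination of ħ/m so much.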
And so with this method, you can achieve a precision which is similar — as you can see from the error bars here — to the "standard", in quotes, highest-precision spectroscopy ever done on a single electron, where people measured its anomalous magnetic moment, okay? That's at Harvard, in the group of Gabrielse. The advantage of this atom-optics method, let's say, is that unlike the spectroscopy method, it's completely independent of QED calculations, okay? It relies only on measured quantities; you take differences and ratios, and that's it — no theoretical input, only measurements, okay? Whereas with that method, to get a value for α, you basically measure the anomalous magnetic moment and then deduce α from a formula provided by QED, which can be very complicated — it's something like 10th-order perturbation theory, or 4th order; 10 is probably too much. So that's already interesting for purely fundamental reasons, of course: to really get better knowledge of the fundamental constants, and possibly of deviations from QED. And I will finish by mentioning that beyond these metrological applications, you can also find applications in precision measurements of forces. For example, there are proposals where you use these Bloch oscillations as they are perturbed: by measuring the Bloch period, you have access to the force. And there are proposals — not yet experiments — in this paper, for example, where this can be used to measure extremely weak forces, for example the Casimir-Polder force that an atom experiences when it gets close to a surface, dielectric or conducting. And so this type of measurement based on Bloch oscillations might find many other applications for measuring extremely small forces that are otherwise difficult to detect.
For this one, the Casimir-Polder force — I cannot say exactly; it would probably be an attonewton or something like that, I would guess. A very quick in-head calculation, so it could be wrong. But those forces are extremely weak. They are weak, but they are amplified by the fact that each atom feels more or less the same force, so your signal is not that of one single atom but of a bunch of them. And it doesn't actually matter so much that they form a BEC, as long as they feel a uniform force over the extent of the cloud. Okay, so I think I'm going to finish here, a bit early — probably that won't be a problem. Tomorrow I will discuss the case where interactions actually make a very big difference. Thank you all for your attention.