So today I'll present a few results I obtained while I was at Janelia Farm in Virginia, on how a few animals represent the sense of direction at the neuronal level. I recently moved to Marseille, but these are results I obtained at Janelia.

The first example I'd like to show you has already been introduced by Reiner. Sorry, on the left or on the right. In this example there is no predator, but I'll still summarize it quickly because I think it's fascinating. So the nest is here, and the bumblebee is released in the wild, and the experimenters have placed five fresh flowers in the environment. The bumblebee's first trip is here in blue: it explores the environment. Sorry, that's a bit annoying. You see that the first trip is a bit random; it explores all the different places. Then in black, after only about 25 trips, it manages to find the optimal path through the five flowers. After more trips it still does some random exploration. I find it fascinating because it means that this animal, an insect, manages to keep a long-term memory of the locations of the flowers, and to do some computation on it, with only 10^5 to 10^6 neurons in its brain.

Another example is the desert ant. The desert ant goes out of the nest and explores to forage. It finds a source of food, and since it's very hot outside, it path-integrates and goes back to the nest in a straight line. It's a desert, so there is no real visual landmark; it's really a sense of direction in the animal that makes it go back to the nest. This is shown by placing an obstacle on the trip back. You see that, carrying the food, it tries to go in the direction of the nest, which it cannot see; it runs into the obstacle, goes around it, tries again to go directly, explores, and then goes back to the nest in a straight line. (If you have questions, please don't hesitate.) So it means that these animals really manage to somehow integrate angles, and also distances, but at least angles.

Now I have to change animals again. I'll switch to the fruit fly, Drosophila, because it's the animal where we have the best genetic access to the neurons. The next experiment is the following; it's the equivalent, for flies, of the Morris water maze that people use for mice. It was performed in the group of Michael Reiser at Janelia. You have a Peltier array in this arena, so that the temperature can be controlled everywhere on the arena, with one cold spot; and these flies prefer cold spots, so naturally they will try to stay in this square here. At the same time you have a visual arena around it, with some features. In the experiment, when the visual landscape is turned, the cold spot is turned as well, so that the animal can in principle associate the visual landscape with a particular place. And that's what it manages to do. So here is the experiment. You see a few flies; there is no social interaction, it's just better to see several at the same time. Here is the visual landscape and the cold spot, and here is the mean time to reach the target. You will see that at first they run around, then they go to this spot. Okay, so now the landscape turns and they go here. Some are more lost; we could make some anthropomorphic comparisons.
And so you see that almost all of them get quicker and quicker: the time to reach the target decreases, so they really manage to associate a visual landscape with a position. What is great with flies is that you have good genetic access, so you can screen many of the neurons: you can inactivate some of them and look at the phenotype in this behavior. That's what has been done, and it was found that to perform the task well, the flies need neurons in the mushroom body, which is a center for associative learning, but also in a structure in the middle of the fly brain called the central complex, which has a ring-like structure in the middle here.

So the idea now is, if we want to understand how the circuit works, we should try to image the activity of these neurons during a task. This organ is really a central structure in the middle of the brain, something like a place of consciousness for the fly; anyway, I won't go that far. In this structure there are about 3,000 neurons, and again we have good genetic access to it. The experiment is the following: a tethered fly walks on a ball while we image some of the neurons in the central complex with a two-photon microscope, and there is the possibility to display a bar or a visual landscape around the fly.

And here is what they see. In this particular experiment the fly is in total darkness, so there is no absolute cue for direction, and it's walking on its ball. Here you see the calcium activity in the central complex, and you see that at the beginning there is a localized activity on the ring, at this position. Now watch what the fly does; it's walking on the ball. Okay, maybe I can let you observe what's going on. With its legs it turns the ball, and when it does so, the bump of activity turns around the ring. Quantitatively, in this graph, the blue curve is the accumulated rotation of the ball as a function of time, and the red curve is the same thing for the bump on the ring. They agree very well, so the bump actually follows the rotation of the fly relative to the external environment, without any external cue at any time: it is really integrating the small movements of the ball.

From this, there are really two questions for me. First, since there is no external cue to the fly, how can the circuit generate persistent activity over such long time scales, several tens of seconds? And second, how is the activity rotated around the ring?

For the persistence of the bump, a few models had been proposed before, actually for mice, where electrophysiologists had observed similar activity at the single-cell level: for some neurons, if you record the activity while the animal is turning its head, the neuron is active only when the head points in a particular direction with respect to the environment. To model that, there is a typical recipe you can use to generate localized activity: you place neurons on a ring. I say a ring because the connectivity of the network is supposed to have circular symmetry; nobody really imagined that there was an actual ring in the brain of the mouse. And then you connect them as follows: each neuron excites its neighbors and inhibits at long range.
And this way, with a good nonlinearity, you can generate localized activity. That's how you can model it: you have an activity f as a function of time and of the position θ along the ring, and you couple the neurons through a nonlinearity φ and a convolution product for the connectivity, so that you respect the ring symmetry.

We'd like to check whether that's really the mechanism at play in the fly, and to do that you need to perturb the activity of the neurons, something that was not possible in mice. So here is the ring, and here is the kymograph of its activity: the position θ along the ring, unwrapped, versus time. The bump is here first; then, for a short amount of time, there is an optogenetic stimulation of some of the neurons. So you have the bump here, then some optogenetic stimulation, and then the bump, as you see, remains at the new location even after the stimulation. It means that this activity is not a mere readout of some other, more internal activity: we are really touching the circuit that is responsible for the persistent activity.

Then we can study it in a bit more detail and explore the potential connectivity of the network. People had looked before at a cosine connectivity, where you excite broadly and you also inhibit broadly at long range; you can study the behavior of the bump analytically in such a model. But there is another possibility: connections that are extremely local, with a flat inhibition at long range. So in this second model the excitation is extremely local, and only the inhibition is long-range and totally flat. We explored these two possibilities.

Here is how the local model works. This is the equation, and you see that in this model, which is local, you still manage to get a broad bump, thanks to the diffusion term. Here there is some input, but you don't need the input; and again, here is the kymograph. You switch the position of the input, and the bump switches position; once the input disappears, you keep a persistent activity, similar to what we observe. You can quantify the behavior of the network as a function of the parameters: the gain α, the diffusion D, and the strength of the inhibition γ. For some of these parameters you have a bump, as here. Then you can study how to switch the position of the bump. Here, in black, you have a bump initially at zero; you put some localized excitation at π, and you look at how strong the input must be to make the bump switch position. There is a minimal input you need to make the bump switch, and you compare this limiting stimulation with the amplitude of the bump without any external stimulation. That ratio is what is represented here: over most of this diagram you find about 1.4, so something not too different from the bump amplitude itself. We did these experiments with optogenetics, and this is what we obtained. As a function of the distance from the bump, first, the strength of input you need to make the bump switch doesn't depend on the distance, it's flat; and second, it's comparable to the values that we predicted.
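The equation itself was on the slide rather than in the transcript, so the following is my reconstruction from the description: the local excitation written as a gain α plus a diffusion term, a rectifying nonlinearity φ, and a flat inhibition of strength γ acting through the mean activity ⟨f⟩:

```latex
\tau\,\partial_t f(\theta,t) \;=\; -f(\theta,t)
  \;+\; \phi\!\Big(\alpha f(\theta,t) \;+\; D\,\partial_\theta^2 f(\theta,t)
  \;-\; \gamma\,\langle f\rangle \;+\; I(\theta,t)\Big).
```

And here is a minimal numerical sketch of this class of models (a narrow Gaussian excitation standing in for the local term, flat inhibition, a clipped rectification for numerical robustness; all parameter values are illustrative, not fitted to the fly data). It forms a bump, holds it after the cue is removed, and relocates it only when a distant input is strong enough, as in the optogenetic experiment:

```python
import numpy as np

# N rate neurons on a ring; excitation is a narrow Gaussian, inhibition
# is flat at all distances, and the nonlinearity is a clipped
# rectification. Illustrative parameters, not fitted to data.
N, tau, dt = 128, 0.1, 0.01
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = (theta[:, None] - theta[None, :] + np.pi) % (2 * np.pi) - np.pi
W = (25.0 * np.exp(-d**2 / (2 * 0.3**2)) - 8.0) / N  # local excitation - flat inhibition

def run(f, inp, steps):
    for _ in range(steps):
        drive = W @ f + inp
        f = f + dt / tau * (-f + np.clip(drive, 0.0, 1.0))
    return f

f = np.zeros(N)
cue = 2.0 * (np.abs(theta) < 0.3)          # transient cue at theta = 0
f = run(f, cue, 300)
f = run(f, np.zeros(N), 3000)              # cue off: the bump persists
print("bump peak after cue removal:", round(theta[np.argmax(f)], 2))

# Optogenetic-like perturbation far from the bump: a weak distant input
# fails to move it, a strong enough one makes it jump and stay.
for amp in (1.0, 3.0):
    g = run(f.copy(), amp * (np.abs(np.abs(theta) - np.pi) < 0.3), 300)
    g = run(g, np.zeros(N), 3000)
    print(f"distant input {amp}: bump now at {theta[np.argmax(g)]:+.2f}")
```

The 1.4 ratio quoted in the talk comes from the full analysis of the model; this sketch only reproduces the qualitative behavior, persistence plus a threshold for switching.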
So we think that we really have a model with local interactions for this bump, and not the broad connectivity that was proposed before.

But now we'd like to understand a bit more how this bump rotates around the ring. Again, something had already been proposed for mice, and the circuit is as follows: you have the main ring circuit with the neurons, as shown before, but you connect it to two other rings, one ring here that connects back shifted to the right, in blue, and one ring here that connects back shifted to the left. And the way to make the bump move is to stimulate one internal ring more than the other. I'll explain that better with the example of the fly.

In the fly, what is great is that we have access to many of the neurons: we have access to their connectivity, and also genetic access, so we can measure the activity of these neurons very precisely. And indeed, as was published some years ago, also at Janelia, there is at least one other cell type that looks a bit like the internal rings of the model I presented before. These neurons here, which I call the central ring neurons, are the ones that were measured before in the optogenetic switch experiment. But there are additional neurons that connect as follows; this is really how they are positioned in the fly, and the connectivity is simplified in this diagram. You have the ring of central ring neurons; they connect to two other lateral rings, which I call the left ring neurons and the right ring neurons; and these neurons connect back to the central ring neurons, the left ring neurons shifted to the right and the right ring neurons shifted to the left. So it's a bit similar to what had been proposed, but notice that in this model there are no internal connections within the central ring: the interactions between the two lateral rings and the central one are actually sufficient to generate the bump.

That's what is simulated here. Again, this is a kymograph, and this is the central ring activity, which moves as you stimulate the left ring neurons or the right ring neurons: with an imbalance between the two sides, you manage to move the bump in one direction or the other. And notice that there is a phase difference between these two activities, the activity in this ring and the activity in that ring. That's what we quantified more precisely. These are the bumps: if you look at this bump here and you stimulate this ring more strongly, the input to the central ring will be stronger on the right side than on the left side; basically, the input to the ring has a phase advance compared to the bump. And if you look at the neurons on the other side, it's actually the contrary: their activity has a phase advance relative to the activity of these neurons. So you have opposite phase shifts when you compare the inputs with the activity in this ring or in that ring, and that's what is plotted here. The experiment was done with two-color calcium imaging, and we managed to show that the phases are exactly as predicted by this model. So we think that this is really the architecture of the neurons that allows the integration along this ring.
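To make the mechanism concrete, here is a toy version of the three-ring architecture, building on the sketch above (again my own illustration, with made-up parameters): the two side rings copy the central bump, their projections back to the central ring are shifted by a few neurons in opposite directions, and an imbalance between them drags the bump around.

```python
import numpy as np

# Central ring holding the bump, plus left/right rings whose return
# projections are shifted by a few neurons in opposite directions.
# A velocity input exciting one side ring more than the other makes
# the bump rotate. Illustrative parameters only.
N, tau, dt, shift = 128, 0.1, 0.01, 4
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = (theta[:, None] - theta[None, :] + np.pi) % (2 * np.pi) - np.pi
W = (20.0 * np.exp(-d**2 / (2 * 0.3**2)) - 8.0) / N

def phi(x):
    return np.clip(x, 0.0, 1.0)

def step(f, l, r, turn):
    # side rings receive a copy of the bump plus the velocity signal
    l = l + dt / tau * (-l + phi(f + max(turn, 0.0)))
    r = r + dt / tau * (-r + phi(f + max(-turn, 0.0)))
    # their projections back to the central ring are shifted oppositely
    back = 0.4 * (np.roll(l, shift) + np.roll(r, -shift))
    f = f + dt / tau * (-f + phi(W @ f + back))
    return f, l, r

f = np.zeros(N); f[np.abs(theta) < 0.3] = 1.0   # bump at theta = 0
l = np.zeros(N); r = np.zeros(N)
for t in range(4000):
    f, l, r = step(f, l, r, turn=0.5 if t > 1000 else 0.0)
print("bump position after turning input:", round(theta[np.argmax(f)], 2))
```

With turn = 0 the two shifted projections are balanced and the bump stays put; a nonzero turn biases one side ring and the bump drifts, which is exactly the imbalance mechanism described above.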
And one thing that is predicted by the theory, but which had not really been measured in the system, is the relationship between the input and the rotational velocity of the bump. So basically you give an input to the circuit, and you look at the speed of the rotation of the bump itself. For a good integration, you need agreement between what you give as input and the rotation of the bump. And since the neurons have a finite time scale, the bump cannot travel at infinite speed. We know roughly the time scale of the neurons, and it actually predicts a saturation of the bump velocity at about 200 degrees per second; the bump cannot go faster than this. And this architecture (and this is non-trivial, it's not true for any architecture) gives good agreement between the input and the bump velocity over a large range of velocities, up to the saturation. So we think that this architecture actually performs very well, and indeed, when you drive it with a noisy input, the integration follows the random trajectory well.

Okay, so this is it for the fly. I don't know if you have questions at this point; otherwise I'll continue.

So I tried to extend this to navigation in 3D. Flies don't seem to rotate in 3D when they fly: they seem to stay horizontal, except maybe at take-off and landing; during flight they travel essentially horizontally. And from the circuit, it seems that they only integrate angles in a 2D plane. But for other animals this is not the case. Here is a bat: you see that bats perform very complicated maneuvers, so their head explores many angles in 3D, and that's also true during flight. If these animals are also able to integrate angles, it means that you need a more complicated model. And indeed, the group of Nachum Ulanovsky showed that some head direction cells in the brain of the bat are tuned not only to the 2D azimuth, but also to the pitch and the roll of the head. Their parameterization uses Euler angles, which we don't like too much, but it shows that at least you have localized activity for these neurons in this space.

So then, how to generalize this 2D bump attractor to 3D rotations? There is no real theory for that. But if we want to have a continuous attractor, a bump of activity in this space, we need to understand what the space of all 3D rotations really is. It's not a completely trivial space, although it's well known. Here is its structure, shown in what is called the axis-angle parameterization. Imagine an object (it can be a head, but it can be anything else) that you rotate in 3D. Any rotation of the object can be obtained by turning the object around an axis, given by a point on a sphere, by an angle, given by a number. This is parameterized as follows: the center of the ball represents the reference orientation of the object; the direction from the center gives the axis of rotation; and the distance from the center gives the angle of the rotation. So here in this example you see a rotation around the z axis: the point goes up as the object turns, and if you rotate in the other direction, it goes down. But if you looked carefully, this object here is in exactly the same orientation as the previous one, so this point on the ball and this point represent exactly identical rotations.
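Since this parameterization comes back several times, here is a tiny illustration of exactly this point, using scipy's rotation utilities: rotations by +π and by -π about the same axis sit at antipodal points of the ball, yet they are the same rotation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Axis-angle ("rotation vector") picture: a 3D rotation is a point in
# the solid ball of radius pi; direction = axis, length = angle.
axis = np.array([0.0, 0.0, 1.0])          # rotate about z

r_plus = R.from_rotvec(np.pi * axis)      # +180 degrees about z
r_minus = R.from_rotvec(-np.pi * axis)    # -180 degrees about z

print(r_plus.as_rotvec(), r_minus.as_rotvec())   # antipodal points
v = np.array([1.0, 2.0, 3.0])
print(r_plus.apply(v), r_minus.apply(v))         # same action on vectors
print((r_plus * r_minus.inv()).magnitude())      # relative angle: 0.0
```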
So the structure of the set of all rotations is actually a solid ball, where points that are opposite on the external sphere are identical; we call them antipodal points. That's roughly how you can embed this space in 3D. And when you represent some of the localized tuning curves of the bat in this space, you obtain activity like this, with a small bump inside the ball.

So we'd like to build a continuous attractor for this network. Each neuron represents a particular orientation in 3D, so each neuron will be associated with a position in the ball parameterizing all the rotations, and you need to understand how to connect the different neurons in this space. And since the neurons at opposite points on the sphere represent identical rotations, you need to connect them too. So you will have local excitation and long-range inhibition, but close to the border you will also excite the other side.

Let me summarize a bit how the continuous attractor was obtained on the ring. With the cosine coupling, shaped like this, you have a kernel that you integrate against the activity. One way to decompose the kernel is as a Fourier series, and what is nice about the Fourier series is that its harmonics form what are called ideals with respect to the convolution product. What does that mean? If you convolve J0 + J1 cos θ with any kind of function, you obtain some constant plus another constant times cos θ, plus a sine term: you remain in the same space. And since we use a linear rectification nonlinearity, we can solve the stationary state here exactly, because we know that we'll have a cosine function, and the result is just a thresholded cosine function. So this system is easy to solve.

If we want to do the same with the space of all rotations, you can integrate over the group of rotations, and you also try to find a good connectivity on this space. To respect the symmetry, you can again write it as a convolution, so the connectivity between a rotation g and a rotation g' is a function of g g'^{-1}, like a proper convolution. And actually you can also write Fourier series in this space and decompose your kernel in this series: first a constant, and then (this is a bit more complicated than on the ring, but not so complicated) the first harmonic is written as the trace of a matrix J1 times the rotation matrix of the group element. Again, to have a bump, you can stop at the constant plus the first harmonic, and then you show that you have steady states. The space of steady states is more complex than on the ring, more diverse: you can have bumps of activity, but also lines; you can even have planes, but for some reason they are unstable. So there is a greater diversity, and these states are interesting to study in themselves.

And we can compare the symmetric tuning curves we obtain with what was measured experimentally. If we represent the symmetric states in the Euler-angle parameterization, we find a great diversity, and they actually compare quite favorably with what you can find in the experiment. So that's a good sign.
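Reconstructing the formulas from the description (my notation; the exact expressions were on the slides): on the ring, truncating the kernel at the first harmonic,

```latex
J(\theta) = J_0 + J_1\cos\theta, \qquad
(J * f)(\theta) = c_0 + c_1\cos\theta + c_2\sin\theta ,
```

so convolving with any f keeps you in the span of {1, cos θ, sin θ}, and with a threshold-linear φ the stationary state is a thresholded cosine that can be solved exactly. The analogue on the rotation group, with the kernel depending only on g g'^{-1} and truncated at the constant plus first harmonic, reads

```latex
W(g) = w_0 + w_1\,\mathrm{Tr}\,R(g),
```

where R(g) is the 3×3 rotation matrix of the group element g; its trace, 1 + 2cos(rotation angle of g), plays the role that the cosine plays on the ring.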
And even more recently, similar 3D head direction cells were found in rats and also in mice. You make the animal walk on vertical walls, and you look at the tuning of some cells when the animal moves on this plane, or on what is called the west wall here on one side, or on the other side, which they call the east wall. You record the tuning of the cell in the plane of each wall, and you observe that the preferred direction shifts as the animal changes wall. That's also what we obtain with this parameterization of 3D rotations, and in some sense it's a non-trivial prediction.

And so now we'd like to understand how we can integrate angles in this 3D space, which is not trivial, because as you might know, the 3D rotations form a non-commutative structure, so integrating in it is not straightforward. What we did for the 2D ring was to connect two side rings asymmetrically to a central ring, so we can do something similar for 3D rotations. Instead of having one 3D bump attractor, you copy it several times; since this manifold is a 3D manifold, you need as many copies as twice the number of dimensions, one per direction along each axis, so you need six copies, and you also couple them asymmetrically. This bump here couples slightly shifted to one side, and this bump connects on the other side, so that if you have a higher input on this copy, the bump tends to move to the right in this space. And that's what happens here: with this coupling, you manage to make the bump travel. And since the whole connectivity is invariant under translations in this space, in the space of 3D rotations, you are sure to take care of the non-commutativity of the group and to integrate the angles correctly.

So that's what animals do for navigation, but it's also related to a nice psychophysics experiment that was done in the early 70s, a fabulous experiment by Shepard and Metzler. This one is on humans, so it's psychophysics, and the experiment is the following: you ask a subject to identify a rotated version of an object. So you ask, is it the same object or not? You can try for yourself. For this one: are these two objects identical up to a 3D rotation? Yes, as you see. And for this one, no. Okay. There seems to be a bit of variance in the reaction time here (it's late and we are close to the break, so I'll try to be quick), but in reality the variance is very low. And what is fascinating is that the reaction time depends only on the angle of the rotation between the two objects, and not on the axis of the rotation: it's really invariant to the direction of the rotation. So it really suggests that the parameterization of rotations in our brain is not something like Euler angles, but directly the 3D rotations.

And we have a simple model of that. It's a bit trivial, but it really tries to show that in our brain we have an implementation of these rotations in 3D. Actually, when you ask subjects how they identified the object (and maybe you tried it yourself), they really try to rotate the object in their head, and that's probably why the reaction time scales linearly with the angle. So here, we suppose that our object is at one point in the space of 3D rotations, and the other object is at another position. When you display the second object, the bump travels to the new location, and you can compute the time it takes for the bump to reach the new location.
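A hedged sketch of that prediction (using scipy again; the bump speed below is a made-up constant, only the linear scaling matters): the predicted reaction time depends only on the relative rotation angle, whatever the axis, whereas the Euler-angle differences between the same pairs of orientations vary wildly.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# If the bump travels at constant speed through rotation space, the
# reaction time is proportional to the rotation angle between the two
# presented orientations, independent of the axis. Hypothetical speed.
SPEED = np.radians(60.0)                        # rad/s, made up

def predicted_rt(r1, r2):
    return (r2 * r1.inv()).magnitude() / SPEED  # geodesic angle / speed

r1 = R.random(random_state=1)                   # a reference orientation
for axis in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]):
    r2 = R.from_rotvec(np.radians(90) * np.array(axis)) * r1
    euler_diff = np.abs(r2.as_euler('zyx') - r1.as_euler('zyx'))
    print(f"axis {axis}: predicted RT = {predicted_rt(r1, r2):.2f} s,"
          f" Euler-angle differences = {np.round(euler_diff, 2)}")
```

Both 90-degree rotations give the same predicted reaction time, while the Euler-angle description of the same two movements looks completely different, which is the point of the experiment.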
And this scales almost linearly with the distance between the bumps. If you do the same with the Euler-angle parameterization, the variance is huge, because it depends on the particular orientation of the object, and it doesn't seem to be linear. So that's something that could also be studied at the neuronal level, but that's work in progress, in the mouse; if you are curious, I can tell you about it.

I'll just thank my collaborators. As I told you, I recently moved to Marseille, so the lab is really in its infancy. The work on the fly central complex was performed by Sung Soo Kim, and then Daniel Turner-Evans and Stephanie Wegener, in the lab of Vivek Jayaraman. For the 3D head direction cells, I worked with Alon Rubin from the Weizmann Institute and Sandro Romani at Janelia. We have a few exciting projects starting in the lab. It might not be as nice as in Trieste, but it's still a nice place; it's not so far from the Calanques in Marseille. And we have PhD and postdoc positions available. So I'll stop here if you have questions. Thank you.