So what interests me is navigation, which is one of the important functions that the olfactory system needs to perform. And in particular, I'm interested in navigation because this is what odors look like in the real world. They're not simple. We've heard this concept several times during this conference: turbulence is the norm in the environment. It's always there. And so finding the source of an odor is a very complex problem. So I'm going to tell you about behavior and physics, try to make sense of how these two come together, and try to put numbers on what's easy and what's complicated, and why. And I want to tell you right away that all of the experiments I'll show are the work of David Gire and Venki Murthy, from when David was a postdoc in Venki's lab. So I acknowledge them, and when I say "we did this," it's always they. Please remember this. The outline of the talk is very simple. I'll give you a very brief introduction, then I'll go into the data, then I'll tell you something about the physics of the odor that we are delivering to these animals, and then I'll go back to the data and try to make sense of what we see. Okay. So this is a very famous experiment that you've probably all seen before. If you have, please complain and we'll skip to the next slide. Let me remind you what this is; I really like it, so I'm very happy to have to remind you. This is 1939, and it's Tolman and co-authors. What he did is put a rat here at the start of this maze. The rat would go into this circular area, explore it, get bored very quickly, then go to the other side and explore the arm, and at the end of the arm there would be food. He did this for a little while, and then the next day the rat comes back.
Anyway, in this case we learned something very important, which is that when the rat comes back the next day and explores the maze, the only arm it was used to is blocked. So what do you think the rat did? Which arm did it explore first? If you know the answer, don't say it. Can you guess? Did it go to 9 and 10 because they are closer, or bang its head hard against this closed wall? That's possible. Any other guess? Back to the start, interesting. Anything else? Well, the answer was very interesting: the rats went to arm number 7 with very high probability. This was the very first indication that we have a map of space, and for a physicist, the fact that there is a map of space is really interesting. After this came a bunch of behavioral experiments, and we now know a lot about the neural correlates of where this map of space is held. In fact, olfaction is one of the ways we can construct this map of space; these are very recent results. So I really like to start with this, but in fact for navigation we don't only need to know where we are in space; we also need to know where things are, and make decisions about our best shot at getting to the place we want to go, to the target. You are not a rat; rats are very smart. Why to the left? No, but he is much smarter, because he knows this is a shortcut. He is even smarter than you. I have to say, part of the reason I think navigation is cool is that I really cannot do it, and if Google Maps weren't there I could never do it. So I understand that we may have problems, but rats are really good, and in fact moths are even better. We've heard this over and over and over.
Moths are able to use these airborne plumes, very complicated plumes, to get to the female. This is the male moth, and the female is very far away and is emitting very few molecules. In fact, it looks like the moth is not only able to use information from these complex turbulent plumes, it's used to them, so it's able to respond to them; if you give it an artificially smooth signal, it actually gets confused. This is what's shown in this beautiful paper: it gets confused, it doesn't go straight, and it starts to move around. So there's a lot known about insects, as we've just heard, very beautiful work. The very simple question that we asked, the first time I talked with Venki and David, is: are rodents able to use these airborne plumes? Somebody earlier said this is relevant for mice because mice use airborne plumes. In fact, there is not a whole lot of evidence that mice use complex airborne plumes, because usually the way we do the experiments, we deliver the odor right next to the nose, or we don't even pay attention to how we deliver the odor, or we don't test this ability directly. So we set out to ask this first very simple question, and this is the way we decided to do it. This is a box; it's about half a meter, it's a cube, and it's transparent, and we have three locations; remember, three locations: one, two, three. At each location there are two tubes: one tube delivers odor and the other one delivers water. The mice are thirsty, so they want to get water. Then we switch on one of the three odor sources; each time it's a different one, drawn at random, and when we open the valve for the odor, we also open the valve for the water. So as soon as the mouse gets to the correct odor source, it can drink. And it's thirsty, so it's encouraged to do it as fast as possible. And we repeat this over and over and over.
There is an inflow and there is an exhaust, so the air motion is pretty complicated in this box; this is the matter of the second part of the talk. Okay, so this is the temporal structure of the task. There is a pre-odor period, about half a minute, with no odor and no water, and then we switch on odor and water for 20 seconds. So at most they'll drink for 20 seconds. This is the scheme of the arena, and the question is: do mice do it? This is a slightly different version of the arena, where the odor and the water are not delivered at exactly the same place, but for all of the data they're actually co-localized. Yes, there are three sources; all of the sources are at the... The flow rate from the vent is large: it's one meter per second over 10 centimeters. From below, the velocity is a few centimeters per second, not that much smaller, but the size of the source is very small: a few millimeters, the diameter of the tube. So this is a movie of a mouse performing the task. What you've seen is a green dot appearing as soon as the odor went on; I'll show it again. A green dot appears as soon as the odor is delivered, and a blue dot appears as soon as the mouse gets there and starts licking. This is under infrared, and the other thing we did is put the valves that open the different sources very far away from the box, so the mice couldn't use auditory cues to decide. It turns out that the mouse actually sniffs the air, does the task, and performs pretty well. What you're looking at here: red trajectories are the ones that go directly from the starting location to the right source, and blue trajectories are the ones that are a little longer, check the wrong source first, and get to the correct location later. This will be important later. Okay, so let's quantify performance.
This is the first, qualitative way of quantifying performance. This is the first day, so the mouse has never seen the task before; it doesn't know what to do. What you're looking at: each line here is one trial, and there is a black dot at each time the mouse is at the correct location. The odor is on here and off here; these are the 20 seconds, and this is the pre-odor period. You can see that the mouse is at the correct source location independently of whether the odor is on or off. It doesn't know the task yet. But eventually, after several days of training, they get it right very fast. If you follow one of these trials, as soon as you put the odor on, they get to the correct source location within very few seconds, and they stay there consistently until the end. So let's re-plot this information in a more easily readable way. This is the time at target as a function of days of training, and we can see that the mice learn pretty fast how to do the task. This is the other number that I'll use to quantify their behavior. There is no punishment when they get to the wrong location first, so there's no reason for them to get it right the first time, but it will be important at the end: already early in training, they get to the correct location first in 80% of the trials. Okay, so the answer to the first question is yes, mice are able to use these complex airborne odors. The next question, of course, is: how? What is their navigation strategy? How can they do this? To answer this question, we need some physics. These are numerical simulations. What I did here is simulate the Navier-Stokes equations, the equations that govern how air moves, and together with the Navier-Stokes equations I integrated the equations of motion of a lot of particles to see where they would go. They start from the source and are transported around by the flow.
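To make this particle-transport step concrete, here is a heavily simplified sketch in Python: passive tracers are released at the source and carried by a velocity field, and their density gives a concentration map. The drift-plus-swirl flow below is a toy stand-in, not the real Navier-Stokes solution, and every name and parameter is illustrative.

```python
import numpy as np

def advect_tracers(velocity, n_particles=2000, n_steps=200, dt=0.01,
                   source=(0.0, 0.0), seed=0):
    """Release particles at the source and step them through the flow.

    `velocity` maps an (N, 2) array of positions to an (N, 2) array of
    velocities; a real run would interpolate a Navier-Stokes field here.
    """
    rng = np.random.default_rng(seed)
    pos = np.tile(np.asarray(source, dtype=float), (n_particles, 1))
    for _ in range(n_steps):
        pos += velocity(pos) * dt                       # advection (Euler step)
        pos += rng.normal(scale=0.002, size=pos.shape)  # small diffusive kick
    return pos

def concentration(pos, bins=32, extent=1.0):
    """Bin particle positions into a coarse concentration map."""
    H, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                             bins=bins, range=[[-extent, extent]] * 2)
    return H / len(pos)

# Toy stand-in flow (a drift plus a swirl), NOT the real simulated field.
def toy_flow(p):
    return np.stack([0.5 - p[:, 1], p[:, 0]], axis=1)

c = concentration(advect_tracers(toy_flow))
```

In the real simulation the same bookkeeping runs on the time-dependent turbulent velocity field, and the histogram becomes the odor concentration at each point in space and time.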
And this allows me to reconstruct the concentration of the odor at each point in space and time, and these movies are the result. What you're looking at here is a top view, at the height of the mouse's nose. Here is the inlet and here is the vent, and the source of the odor is at the ground. I'm cutting this volume at a height of one centimeter, where the mouse would be sniffing. And here are side views: this is a cross-section in this plane, and this is in this plane. What you see is that it fluctuates, which is the bottom line. You can be much more quantitative about how much it fluctuates, and this is what physicists like to do: write down equations, analyze these signals, see how the correlations go, the structure functions. I'm not going to tell you more about how to quantify this signal, but I want to analyze why and how one is able to get to the source of the odor in this situation. So I'll tell you something about the physics related to this. Okay, so the first thing: we've heard even at this conference that mice are able to measure gradients. This is very clear; we've heard in Tim's talk that they're able to measure gradients in two dimensions with a single sniff. So how far can you go just measuring gradients? The idea is this: I took all of those snapshots of the odor at the height of the mouse's nose, and I designed a very simple algorithm. I am a searcher; I start from a random location, take a random direction, and go until I find the odor. When I find the odor, I move in the direction of the gradient; I climb the gradient of the odor, and I do this at a constant speed and a constant sampling rate. And this is the result. You don't see it here, because this way of representing the scalar is almost white, but the scalar, the odor, is there in the background.
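The searcher just described can be sketched as follows: wander randomly while there is no signal, and take one step up the local gradient whenever the concentration is above a detection threshold. The smooth Gaussian plume, grid, and threshold here are illustrative stand-ins for the turbulent simulation snapshots.

```python
import numpy as np

def gradient_searcher(conc, start, source, threshold=1e-3,
                      max_steps=500, seed=0):
    """Random walk until odor is detected, then climb the local gradient.

    `conc` is one 2-D concentration snapshot on a grid; a real run would
    use a fresh turbulent snapshot at every sniff. Returns the visited
    grid positions.
    """
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(conc)                 # spatial gradient of the odor
    pos = np.array(start, dtype=int)
    path = [tuple(pos)]
    for _ in range(max_steps):
        if conc[pos[0], pos[1]] < threshold:
            move = rng.integers(-1, 2, size=2)       # no signal: explore
        else:
            g = np.array([gy[pos[0], pos[1]], gx[pos[0], pos[1]]])
            move = np.sign(g).astype(int)            # one step up the gradient
        pos = np.clip(pos + move, 0, np.array(conc.shape) - 1)
        path.append(tuple(pos))
        if tuple(pos) == tuple(source):
            break
    return path

# Smooth toy plume peaked at the source, standing in for a snapshot.
n = 40
yy, xx = np.mgrid[0:n, 0:n]
src = (10, 30)
plume = np.exp(-((yy - src[0]) ** 2 + (xx - src[1]) ** 2) / 60.0)

path = gradient_searcher(plume, start=(20, 15), source=src)
```

On a fluctuating snapshot the gradient step is only right on average, which is what turns this into a biased random walk rather than a straight climb.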
What you see is that the walker eventually gets to the source of the odor. Yes, I took numbers that fit the physiology: the volume of air that the mouse inhales corresponds to the spatial resolution I'm using in the simulation, and the sampling happens at a constant rate, which is the measured, physiological one. So everything is physiological. And the time I allow the searcher to find the source is 20 seconds, which is the time scale we use in the experiment. So if you look at these trajectories, you can see that the algorithm converges because it's a biased random walk: I'm moving in random directions, but eventually I'm drifting toward the source. So we can put numbers on this and try to understand where the signal is and how effective this algorithm is. To do this, first of all: when the searcher encounters the plume, it measures the gradient right there, the instantaneous gradient at that location, and then follows the direction of the gradient; it climbs the odor concentration. No, no, it's instantaneous, and it's a spatial gradient. We don't know what mice are doing, but we know that they can measure... well, yes, we don't know if they use an instantaneous gradient; you're right, mice may be doing something else. The searcher goes at a constant velocity and samples at a constant rate, at 7 hertz. If the odor were just a thin line, it wouldn't work; the algorithm relies on being able to measure a gradient, which means that the odor concentration at the current location and at the nearby locations is nonzero. Okay, so it works because, as soon as you find the plume, you do this biased random walk. And why is it a biased random walk?
Because the probability of moving a step closer to the source is larger than the probability of moving away from it. And how can we quantify this? Well, sorry, this was quantifying the performance of the algorithm, which is comparable with the performance of mice between five and eight days of training; we'll come back to that. But back to why it works: let's quantify this bias, which is the reason the algorithm works. To quantify the bias, I focus on this angle. What is this angle? If I'm a searcher and I'm here, and this is the location of the source, that is the direction I would really like to move in. Okay? Now say that I'm here at a certain time, I measure the gradient, and the gradient points in this direction. If this angle is small, it means that following the gradient I don't move exactly toward the source, but I still move closer to it. Small angles are good; they move me closer to the source. But if the situation is this one at the next time step and I follow the gradient, that's really bad, because this angle is big and I'm moving away from the source. Okay? So this whole thing works only if small angles are more probable than large angles. So let's quantify the distribution of these angles. This is the histogram for the two sources: red is when the odor comes from the source on the right and blue when it comes from the source on the left. The two sources are not placed symmetrically, so the probability distributions are different. And you can see, qualitatively, that small angles are more probable than large angles. Now we want to put a number on how strong this bias is. We calculate the area under these two portions of the histogram: this area represents the probability that, following the gradient, I move closer to the source, and this area represents the probability that, following the gradient, I move away from it.
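As a sketch of this computation: on a snapshot, the sign of the dot product between the local gradient and the direction to the source separates the small angles (the step moves you closer) from the large ones (it moves you away), and the difference between the two fractions is the bias. The noisy Gaussian plume and all parameters below are illustrative stand-ins for the simulation data.

```python
import numpy as np

def gradient_bias(conc, source, threshold=1e-3):
    """Fraction of signal-bearing points whose gradient points toward the
    source (angle < 90 deg), minus the fraction pointing away."""
    gy, gx = np.gradient(conc)
    yy, xx = np.mgrid[0:conc.shape[0], 0:conc.shape[1]]
    # Dot product between the local gradient and the direction to the source.
    dot = gy * (source[0] - yy) + gx * (source[1] - xx)
    mask = conc > threshold          # only where there is a signal
    mask[source] = False             # exclude the source itself
    good = np.mean(dot[mask] > 0)    # following the gradient moves us closer
    bad = np.mean(dot[mask] < 0)     # following the gradient moves us away
    return good - bad

# Noisy Gaussian plume as an illustrative stand-in for a turbulent snapshot.
rng = np.random.default_rng(0)
n = 60
yy, xx = np.mgrid[0:n, 0:n]
src = (30, 45)
plume = np.exp(-((yy - src[0]) ** 2 + (xx - src[1]) ** 2) / 80.0)
noisy = plume + 0.02 * rng.standard_normal((n, n))

bias = gradient_bias(noisy, src)
```

On the smooth plume the bias is essentially 1; the noise drags it down toward zero, which is the regime the real snapshots live in.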
The difference between the two is my bias. If this bias is larger than zero, then I'm in business. Okay? I can calculate it from my simulations, and it turns out that the bias is nonzero: for the source on the left it is 0.18, and for the source on the right it is slightly less, but still nonzero. And this is the correct order of magnitude to account for the time the algorithm needs to converge. So this is why the algorithm works. This all seems very nice, but if you've heard anything about navigation in turbulent plumes, you should be asking why it works, and in fact it does not work at a distance of 100 meters. It only works close to the source, essentially within the plume. This is how I quantify this property: this is the bias as a function of distance from the source. The bias is positive and large when I'm close to the source, but it decays pretty rapidly as I move away. The algorithm would fail at about 30 or 40 centimeters from the source; I just don't see it because my box is half a meter on each side. But we can still learn things from this simple example. So, can I make this algorithm better in a simple way? If you talk to anybody who does stochastic optimization, they'll tell you: just average, because averaging improves the convergence rate of stochastic subgradient algorithms. So this is what I did here, in an idealized case. I averaged my snapshots over a time window, took the new frame, which is now much smoother than the original frames, and ran the algorithm on this smoother version of the odor. It turns out that performance does improve, and it's noticeable; not huge, but a real improvement. Except that mice, if they wanted the average, would have to compute it themselves.
Nobody gives them the average, which is what normally happens when you run stochastic gradient algorithms on data sets: you can reuse your data as much as you want. Mice would have to sample the odor, pause and stay there for the duration of their time window, and average the signal. And if you account for that, it turns out that it's never good to average. So the message of this slide is that it's much better to just get a signal and make a move; it's never worth waiting to make the signal more reliable. This concept of getting the signal and moving is really important, and it turns out this is what limits performance in this algorithm. We can give a name to this business of getting a signal: it's called intermittency, which is what I'll try to explain in the next slide. This is a condensation of two movies, one with the source on the left and one with the source on the right. In each box there is the time series of the signal at that location. It doesn't matter what the numbers are, so I didn't put any, because I don't care right now; I want to show you a very simple qualitative property of the signal. This is roughly where the source is; let's look at the signal in these boxes right downstream from the source. I magnified them here, and it turns out that the signal there fluctuates a lot; it goes up and down, but it's always there. I always get a signal. But if I move away from the source, now I'm magnifying the signal in these boxes far away from the source, at the boundary of the plume. What you see here are vertical lines, and vertical lines in a log-linear plot mean that the signal is actually zero. In all of the time intervals between two vertical lines the signal is zero, and this is intermittency: sometimes you get a signal, but then there are long periods of time when you get nothing.
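Intermittency can be quantified directly from a time series as the fraction of samples with no detectable signal, together with the lengths of the blank periods. The burst-plus-blank series below is synthetic and purely illustrative.

```python
import numpy as np

def intermittency(signal, threshold=0.0):
    """Fraction of samples with no detectable signal."""
    return np.mean(np.asarray(signal) <= threshold)

def blank_durations(signal, threshold=0.0):
    """Lengths (in samples) of the consecutive runs with no signal."""
    off = (np.asarray(signal) <= threshold).astype(np.int8)
    # +1 at the start of each blank run, -1 just after its end.
    edges = np.flatnonzero(np.diff(np.concatenate(([0], off, [0]))))
    starts, ends = edges[::2], edges[1::2]
    return ends - starts

# Synthetic series: bursts of odor separated by long blanks, as at the
# edge of the plume (the 20% burst probability is illustrative).
rng = np.random.default_rng(0)
s = np.where(rng.random(1000) < 0.2, rng.exponential(1.0, 1000), 0.0)

frac = intermittency(s)
runs = blank_durations(s)
```

Near the source `frac` would be close to zero; at the edge of the plume it dominates, and it is the long entries of `runs`, not the fluctuations, that starve the searcher of information.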
And this is what's really hard about navigation in turbulent environments: much of the time you don't get anything, so you don't know what to do. The fluctuations are not the problem, because when the signal fluctuates there is at least a little bit of signal; the problem is what to do when you get nothing at all. That's what makes navigating in turbulent environments hard. And in this simple environment I can show that this is what limits the performance of the algorithm. The two sources, as I said, are slightly different; they're not symmetrical with respect to the center. This source is slightly closer to the midline, so the flow there is more turbulent, more complicated, but it turns out that the plume it generates is larger, so the first encounter happens earlier than for the other source. Notice that the bias toward this source is smaller: the gradient in this case is less reliable, but I get the signal much earlier, so I have more time, more moves, to get there, and this makes this source easier to track than the other one, which is what I'm saying here. This one is harder to track, although its gradient is more reliable, because I get the signal later. So intermittency is really what's hard about turbulence. Okay, so now let's go back to the data. As I said before, the performance of the algorithm is similar to the performance of the animals at this stage of training: they're not naive, but they're not at their best. But the curve keeps improving; the mice get better, and why they get better is the new question. It turns out that they get better because they cheat: they start to do something that we didn't ask them to do. The first thing I want to show is that they get faster at locating the source of the odor, but they also get it wrong many times, much more often than before.
Late in training, meaning between 14 and 18 days of training, they're very fast: they're at the odor source for an average of 16 seconds, getting there on average in four seconds, but they get it wrong many times; they check the wrong location first in a high percentage of trials. The other interesting thing is that late in training they stop thinking; they just go like crazy. Their velocity increases and the number of pauses decreases. We thought about this for a while, and then came up with the following. We analyzed the trajectories a little, and it looks like the trajectories early in training are much more wiggly, while the trajectories late in training are much smoother and less complex; this is just a way of saying that they explore less space. The idea we got is that maybe early in training, when they have no idea where the sources are, they have to rely on the odor to locate the source. But late in training, because the sources are always in those three locations, they know where the locations are; they just go in circles and check all the sources sequentially until they find the right one. So the model we had in mind is the following. Early in training, when you have no prior on source location, your trajectories have to span more of the space, you have to be more careful, and you have to actually sample and use the odor plume. Late in training, that's not what happens: they switch behavior completely and just go from one location to the next sequentially. This would explain why they don't pause anymore, because they don't need to; their trajectories are stereotyped. It would also explain why their performance improves in terms of time: they have nothing to compute, so they can go like crazy, and it will be very fast. Yes? So I think what you are asking is this.
If this idea is correct, then we would predict the following: if we train another set of mice on another set of locations and then suddenly move the locations, the mice are still trained, they still know the association between odor and water, but suddenly they have no prior on source location anymore. And if this is the case, they should switch back to a behavior more similar to early training. Is that what you are asking? Okay, so that's what we did. We trained this other set of mice on locations A, B and C, and at 18 days of training, when they were very good, the blue curve is how long they stay at the correct source location late in training, we moved the locations to 1, 2, 3 and tested them again. The performance drops just a little bit. Okay, but let's look at the behavior. This is how it looks. I wrote "prior" in big letters when, late in training, all of the behavior is expected to depend on the prior, and "prior" in small letters when all of the behavior should depend on the odor. Our prediction is that the early and moved conditions, where there is very little prior, should be similar, and the late condition should be different. Okay, so as I said, late in training they are careless, and this new set of mice is as careless as the previous one; but as soon as we move the locations, they have no idea where they are, and they are careful again. This is the same data represented with colored trajectories, so there is no new content here. And this is the last number I wanted to show you. Late in training, we said, they don't need to pause at all; they can go very fast. But if I move the locations to unknown ones, they don't know where they are, they cannot check them sequentially, so they have to pause again and go slow again. And this moved condition looks very much like the early condition, in both the velocity and the pauses per second. Okay, I think I might have a figure on that later.
If you look at the probability distribution of presence, it looks like for at least the first couple of days they go check the old locations, and then eventually they understand that the locations are new and give up on the old ones. It takes a while, though; it's not immediate. So if this is the situation, then our algorithm should correlate well with the early and moved behavior, but not with the late one; the late one should be completely uncorrelated, since it should not care about the odor at all. And this is the analysis I want to present here. This is the performance of the gradient-climbing algorithm as a function of the starting location, represented in a color code. When I start very far away from the source, I'm at the target for only a few seconds, but when I start very close to the source, I'm at the target for a long time, almost 20 seconds. And these are the performances of the two data sets I just told you about, the early and moved conditions; in these two conditions the prior on source location is very small, so I expect them to correlate well with this algorithm. And these are the results. I show you two numbers here. This is the difference in performance between tracking the right and the left source. As I said, the two sources are different because they are not symmetrical with respect to the center, but this only matters when you actually care about the odor. These three data sets care about the odor, and all of them track the source on the right more easily than the source on the left, for the reasons we discussed before. The other number I wanted to show is the Pearson correlation coefficient between this map and this map, and there is some correlation. Now, if I add this data set, you can tell already that it is qualitatively different from the other data sets.
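The comparison between algorithm and behavior comes down to a Pearson correlation between two maps of time-at-target as a function of start location. A sketch of that computation on synthetic maps (the real maps come from the simulations and the tracking data; all numbers here are illustrative):

```python
import numpy as np

def map_correlation(map_a, map_b):
    """Pearson correlation between two performance maps of the same shape."""
    return np.corrcoef(np.ravel(map_a), np.ravel(map_b))[0, 1]

# Synthetic performance maps: time at target decays with distance from the
# source; the second map is a noisy version of the first.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:20, 0:20]
dist = np.hypot(yy - 5, xx - 15)
algo_map = 20.0 * np.exp(-dist / 10.0)            # seconds at target
behav_map = algo_map + rng.normal(0.0, 2.0, algo_map.shape)

r = map_correlation(algo_map, behav_map)
```

A behavioral map driven by the odor should correlate with the algorithm's map, as in this toy pair; a map driven by stereotyped circling should not.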
This is late in training, and the performance does not depend on whether you're getting the odor or not. Even the symmetry is circular, because they are running in circles. What defines the performance of these animals is the distance from the source along these trajectories, in this space. And in fact this data set shows no preference for the right or the left: you track the right and the left equally easily, because they lie on the same trajectories; it doesn't change anything. And this data set is much less correlated with the early and moved conditions, the gradient-climbing condition. Now, to test further this idea that late in training they rely not on the odor but on their prior, we designed a new algorithm based on the following. We took the foraging trajectories from when there is no odor around. Because the mice hang out more around the locations of the sources, we take these trajectories as proxies for their prior on source location. We take each of these trajectories and ask how long it takes, along the trajectory, to get to the correct source location, and we plot the map of performance as a function of the starting point. So instead of taking the gradient-climbing trajectories, we take other trajectories that are just a proxy for the prior on source location. This clearly doesn't care about the odor at all; it cares only about the prior. And it turns out that this again shows no preference for one source or the other, because the trajectories are completely symmetrical, and the situation is now reversed: if I correlate this new algorithm with the early and moved conditions, which are based not on the prior but only on the odor, the correlation is much lower than with the late condition, which is based on the prior. This is a complicated way of trying to test our ideas.
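The prior-based algorithm reduces, for each foraging trajectory, to a first-passage computation: how long along the trajectory until you first come within reach of the correct source. A sketch, with an illustrative circular loop standing in for the real foraging paths (units and numbers are made up for the example):

```python
import numpy as np

def time_to_source(traj, source, radius, dt):
    """First time the trajectory comes within `radius` of the source.

    `traj` is an (N, 2) array of positions sampled every `dt` seconds;
    returns np.inf if the source is never reached.
    """
    d = np.hypot(traj[:, 0] - source[0], traj[:, 1] - source[1])
    hits = np.flatnonzero(d <= radius)
    return hits[0] * dt if hits.size else np.inf

# Illustrative foraging trajectory: a loop through the arena that passes
# the source locations one after another (positions in meters).
t = np.linspace(0, 2 * np.pi, 200)
traj = np.stack([0.25 + 0.15 * np.cos(t), 0.25 + 0.15 * np.sin(t)], axis=1)

tt = time_to_source(traj, source=(0.25, 0.40), radius=0.02, dt=0.1)
```

Running this over many odor-free trajectories and many start points gives a performance map that depends only on the prior, never on the odor, which is exactly the comparison made above.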
I don't know if I did a good job of explaining this, so feel free to ask questions. The idea is simply that either the prior or the sensory input is what matters for performance, and if you design an algorithm based only on the sensory input, it will correlate more with the early and moved conditions, where the mice know nothing about source location. That is what this is testing. So, this was very short. The conclusions of the talk: first, mice are able to use these complex odor plumes to locate the source of an odor. Second, although there is turbulence and the signal fluctuates, this task can be accomplished with a point-wise, instantaneous measurement of the gradient. The reason this is true is that we are close to the source, inside the plume, so we don't have much of a problem finding the plume and getting the signal; we almost always get the signal. Okay? And the next idea of the talk was that if you leave the mice in this environment too long, they look for shortcuts; they don't want to do this complex computation, sampling the odor and computing who knows what. They switch to a habitual behavior that doesn't care about the sensory input anymore, only about their prior on source location. And so I wanted to conclude with what's next. One of the obvious things that needs to be done is to move to a larger environment and use sources that are not predictable. This is to challenge the mice, because I believe they are able to do much more than what we are asking of them, and this needs to be tested. The next thing is: if we do challenge them in a much more complex environment, where the instantaneous gradient is not enough, what do they do? What do they measure? What are they able to store in terms of information about this complex signal? There are several things a physicist could think about.
There's the signal, there are correlations, time series, averages. What do they store? We know, and we've heard it over and over, that the capacity of the nervous system is huge; the brain can store pretty much anything in terms of number of bits. So the question is what is really relevant for complex navigation. The last thing I wanted to mention is that we observed these pauses, and it doesn't seem that pausing just for averaging is useful. So a question is: why would they actually pause? Why not keep moving? The answer might simply be that they don't know how to make decisions while they move, or that they have to take time to sniff and process the information, or maybe there is some cognitive load related to making computations that we could actually quantify in terms of number of pauses. And with this, I thank David and Venki especially, and Vika Nane who helped with the experiment, and I thank you very much for your attention.