managed to attract, which makes it really quite an opportunity to be part of this conference for these last few days. And I appreciate the chance to give a talk. I'm also happy to visit; this is my first time at ICTP, and it's a beautiful place. And in a world where, as Aravi was saying, inequality is only growing, I think it's super nice to see institutions that try to take that on in different ways. So today is at least predominantly C. elegans. Aravi went first and got to introduce you to the organism, so I'm not going to spend that much time on the background biology. But for us, this has been an important model system for thinking about behavior. Before I get started, I'm sorry there's a little bit of separation between you and me, because I want to use the blackboard a little, so I'm going to sit behind here. It makes it a little awkward. So this is my group. We have a kind of unusual construction where we are superposed between Europe and Japan; it turns out to actually work. In particular, the work that I'm going to talk about today was spearheaded by two very talented theorist PhD students. Tosif, who just graduated in Okinawa, will first try to take his theory and really make it work with Mei Zhen and Aravi Samuel, but he'll be around, and that will only be a beginning project. Both of these students, if you're interested in contacting them for postdocs later, are very talented. And Antonio, who published his first work this year on building local models to understand dynamics, either in behavior or in neurons; he's a PhD student with me in Amsterdam. Our group as a whole, so these two for today, but our group as a whole, is roughly evenly split between Amsterdam and Japan. 
Just to give you an overall sense of our trajectory, where I'm coming from: my background is actually quantum gravity, where there are absolutely no experiments and no data. So why not go to the opposite end, where, to a first approximation, there's only data and no theory: biophysics. But there are lots of different scales in biophysics. You can obviously work on molecules, pulling on DNA. You can work on how cells work. You can work on how systems of neurons, groups of cells, work together, like the brain or like tissue. And then, of course, there are single organisms and collectives. I got very interested in this outer scale, whether it's single organisms or collectives, because what we can do molecularly in living systems is just outstanding and amazing. But when it comes to how genes, cells, neurons, networks, and muscles all come together to do the thing they're supposed to do: at the end of the day, your evolutionary success is predicated on how you avoid predators, how you mate with a hermaphrodite. All of this comes together on the behavioral scale. And until very, very recently, our ability to quantitatively interrogate that behavioral scale was really dramatically lower than our ability to interrogate the smaller scales. So, colloquially: if we're going to do biophysics, can we do it with the level of precision that we're used to on molecular and cellular scales, but on the scale of a whole organism? It's clearly a daunting task, but it's our goalpost. And there are a lot of techniques that come into play. So think about C. elegans; here's an example of a shape of C. elegans that we're very interested in. I will describe a new experiment that we're building to interrogate social behavior. We also think a little bit about large collectives of animals, in this case a beehive. 
And all this, of course, feeds our interest in how we build dynamical models of the data, how we think theoretically about what produces behavior. Before I get into C. elegans and a few of the other systems, there's a beautiful line of reasoning that I think sets up the kind of examples we would like to do, but can't yet do, in more complicated organisms. And this comes from the study of single cells, of E. coli. It's a little bit out of order, but these two plots show a three-dimensional trajectory of a single E. coli cell moving in a volume. This is a later version; earlier versions were measured by Howard Berg. And it gave rise to these models of run and tumble: a sort of taxis strategy where you go in the same direction while measuring your sensory input. These cells have sensors on the surface of the cell, measuring local concentrations of chemicals, and they use particular algorithms to decide, if the concentration is changing in the wrong way, that maybe they shouldn't keep going in the same direction. In this case, since they can't really steer as well as other organisms, the strategy is to tumble into different directions. So a behavioral-level strategy gives rise to these two plots. This is what it looks like when you actually image them. These kinds of ideas came much earlier, but this is what it looks like when you try to image them. You can see the cell body, and you can also see how they move. There's a set of flagella attached to the cell body in this low-Reynolds-number environment, of your gut, for example, for E. coli. When the cell is going in a roughly straight line, these flagella, there are maybe six or eight of them, are wrapped together to form a single propeller. They all turn in the same direction and you propel like a boat, okay? That's when you're going in a straight line. 
You can also see some of them go a little bit out of focus; that was a tumbling one right there. So if you don't want to go in that overall straight line, then you switch the direction of one or more of the motors that are running these flagella. They go in the opposite fashion, counterclockwise versus clockwise, they beat against the other flagella, and that tumbles you and you move off in a different direction. That's the imaging of the behavior that underlies these trajectories, okay? In a cartoon, just to make clear what you couldn't see from the real data, this is what it looks like: you have a cell body and then these long flagella, and you can really take apart the molecular mechanisms that guide these motors. So there's a full understanding; here's the base of the flagellum, turned by a motor complex here. And it's a single cell, so it doesn't have a neural brain, but it has a chemical brain, and you can interrogate this chemical brain by imaging concentrations of different kinds of molecules. At every stage of this process in this single cell, there are analogs to what we'd want to do in more complex organisms. We want to be able to image the behavior in some high-precision way, so we can see what's going on with the posture. We of course want to understand the underlying strategy that the behavior implements, right? That's over here. We want to understand the mechanisms that guide that behavior, and the analog for animals of this chemical brain is the neural brain, as Aravi was showing before. And it's only relatively recently that both the ability to image the behavior at high resolution and, of course, to simultaneously interrogate things like the brain have become available, right? And that's a template for what I think you'd like to do in other organisms. 
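As an aside for anyone who wants to play with this, the run-and-tumble strategy described above is easy to caricature in a few lines. This is a toy sketch with made-up parameters (speed, tumble rate), not a fit to real E. coli data:

```python
# Minimal run-and-tumble random walk in 2D. All parameters are illustrative,
# not calibrated to real E. coli.
import numpy as np

rng = np.random.default_rng(0)

def run_and_tumble(n_steps=2000, dt=0.01, speed=20.0, tumble_rate=1.0):
    """Simulate a 2D run-and-tumble trajectory.

    During a 'run' the cell moves in a straight line; with probability
    tumble_rate*dt per step it 'tumbles' and picks a new random heading.
    """
    pos = np.zeros((n_steps, 2))
    theta = rng.uniform(0, 2 * np.pi)
    for t in range(1, n_steps):
        if rng.random() < tumble_rate * dt:        # tumble: pick new direction
            theta = rng.uniform(0, 2 * np.pi)
        step = speed * dt * np.array([np.cos(theta), np.sin(theta)])
        pos[t] = pos[t - 1] + step                 # run: straight-line motion
    return pos

traj = run_and_tumble()
```

A real chemotaxis model would additionally modulate the tumble rate by the sensed concentration change, which is the algorithmic part of the strategy discussed above.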
Okay, before I jump in, I just wanted to give you a quick look at some other experiments. I'm a theorist, but sometimes you just can't wait for data; there isn't a whole lot of the kind of high-resolution, posture-scale behavioral data that we're interested in. And I was curious about not just individual behavior but social behavior. So we built a 3D imaging system for swimming adult zebrafish, one of the other genetic model systems for which you can do any number of genetic perturbations. It looks like this: a sort of cubic world, a 60-centimeter cube, with three orthogonal cameras. And what's been powerful in the last couple of years, some speakers already mentioned it, is especially convolutional networks, designed to mimic a little of our own image-processing capabilities, though that's not really relevant here. They allow you to label points on animals without markers, and they allow this marvellous tracking of relevant points on animals. In this case, I'm just showing you an example of the views from these three different cameras, where we're labeling the head and a midpoint where the first set of fins sticks out. We can do more, but you can do this with networks, and that opens up a behavioral landscape that's just growing exponentially. This has really been a revolution in the last couple of years. One of the interesting things about having 3D: fish obviously swim in 3D, but what really goes on in 3D, nobody really knows yet. And you can see why having multiple cameras helps. In this case you might try to get away with two cameras, but with three different views, you don't always see the fish in the same profile, and that makes labeling the head or the midpoint very hard. So you're not always getting the same data from all the cameras, but you hope that they can come together, and so you get a trajectory. And that's what you get. 
So here's an example of these two fish. I think these are two male fish; this is just a test trajectory. They're hanging out in this volume, and they're definitely strongly interacting. Zebrafish are a very social, socially interacting species. And then, of course, for us, eventually, once the data is good enough, there they go, you want to understand what gives rise to these interactions: how do I characterize these trajectories in some interesting way? And ultimately, if you design representations or characterizations that then allow you to manipulate, maybe you can go back and manipulate the social brain, or the social parts of the brain, in some interesting way. Okay. So can you do this analysis in real time? This one, no, not yet, but give it two years. It's just on this crazy curve, right? And just to say, there's a lot of talk about neural networks and their ability to find patterns, even to the extent of whether they're doing physics of some kind or another, and all of that is kind of secondary for us. Because if you're interested in behavior, for this first step from image to posture, you don't care how you get there; you just want to get there and make sure that it's relevant. That's why these are really powerful for us. But it's changing so rapidly that two years is no problem. And these are adults, so they're not transparent, but there is now a transparent strain of zebrafish, so you could even imagine imaging the neural activity. I don't know if it's two years, but eventually. Okay. Now I want to get into the meat of the talk, which is a system where you don't need any of this. You don't need deep networks. It's a good exercise for how to think about first posture and then behavior, okay? And this slide is really just to set up the problem. So it's C. 
elegans. So, like you saw with Aravi, we'll at least let it crawl around in two dimensions. It does live in three, and there's absolutely all kinds of interesting behavior in three dimensions, but that's a lot harder to image. So this isn't its full natural environment, but we'll at least let it crawl around on a two-dimensional agar plate. The representation of the posture of this organism is then really straightforward. I get an image like this, a high-contrast image, easy to take. This is a tracking microscope, and these are experiments that were done in collaboration with William Ryu at the University of Toronto. Out of that image you can extract, for this animal, the relevant degrees of freedom, the shape. Shape is encoded geometrically as the geometry of the centerline curve. So I can extract that curve from this image and describe it as a sequence of tangent angles. I choose a representation that's high enough dimensional that I get all the little wiggles that I think are important for that experiment. In this case, that's a hundred, but that's a number that's up to you. And from those hundred angles, you then get a space of postures. But, just like you, of course... and here's the other reason, which I didn't mention at the beginning, why I think behavior is a really interesting signal to pay attention to: it's demonstrably much lower dimensional than the brain, right? You have all these degrees of freedom, but you generally don't use them all when you behave. You could, but you inhabit a lower-dimensional space than your actuators, your degrees of freedom, would allow, and a much lower-dimensional space than the hundred-billion-dimensional neural firing space, okay? So there's a sense in which, although it's a complex signal, it certainly seems like it might be an easier signal than the brain itself. Like you, C. 
elegans also inhabits a much lower dimensional space within the space of postures. In principle it could have been a hundred-dimensional space; in practice, it's approximately described by four or five dimensions, using simple techniques like PCA. The lesson here is not so much that the worm is low dimensional, although that's convenient. The lesson is that in two dimensions, a four- or five-dimensional space is actually quite a lot: you can generate very complex two-dimensional shapes just by superposing four fundamental directions, and that's really the point here, okay? So this four-dimensional space that you find using PCA on the space of shapes of this worm is rich enough that it's the same space from worm to worm. It doesn't depend on which worm you use. It doesn't depend on what's going on with the worm, so you can drive something like an escape response; I bet it would work with mating, with Aravi's data. And it more or less doesn't matter what mutants you use; it is possible to make a long mutant for which you need a few extra dimensions, but okay. And for us, powerfully, it's also interpretable in terms of the behavior. The first two of these dimensions correspond to, and I'll talk more about that, a traveling wave, describing the worm going forward and backward. The third dimension, this one, corresponds to turning, all right? And at the end of the day, it's sort of: where are you going? Okay, when the red comes on, we're going to zap the worm with a laser heat impulse. It's going to cause the worm to do an escape behavior. On the right, you have the posture. On the left, you have the kinematic space, in this case reduced to three dimensions, so you can see what the time series of postures is in that three-dimensional space. 
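The PCA step described here is simple enough to sketch. Below is a minimal version on synthetic tangent-angle data; the real analysis would use the measured (frames x 100) angle matrix, and the fake traveling-wave signal, the mode count, and all parameters here are illustrative assumptions, not the actual worm data:

```python
# Sketch of PCA on tangent-angle data ("eigenworm"-style modes).
# We fabricate low-dimensional posture data to show the mechanics.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_angles = 5000, 100
s = np.linspace(0, 1, n_angles)            # position along the centerline

# Synthetic postures: a traveling wave built from two phase-shifted spatial
# modes plus noise, mimicking the dominant undulation of the worm.
phase = rng.uniform(0, 2 * np.pi, n_frames)
angles = (np.cos(phase)[:, None] * np.sin(3 * np.pi * s)
          + np.sin(phase)[:, None] * np.cos(3 * np.pi * s)
          + 0.05 * rng.standard_normal((n_frames, n_angles)))

# PCA via SVD of the mean-centered angle matrix.
centered = angles - angles.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = S**2 / np.sum(S**2)

# Project each frame onto the leading modes: the amplitudes a1..a4.
a = centered @ Vt[:4].T
```

For this synthetic traveling wave, the first two modes capture nearly all the variance, which is the analog of the first two worm dimensions describing forward and backward waves.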
It went through an interesting set of postures called an omega turn: what's really happening is that when you zap it, the worm reverses and then brings its head around to its tail and goes off in the opposite direction. And that level of description, however you get the posture modes, having a rich video on the right and a kinematic posture space on the left, is something that we'd like to have for our fish and for the other animals that we study, okay? But what's missing in all of this, and really the point of my talk today, is that although some of these postural modes are interpretable directly in terms of the behavior, this is a kinematic picture. This is not a dynamic picture. It's not a picture of behavior; it's a picture of posture, right? It's a first step. But what we really need, and what we don't really have, is a good understanding of how to think about dynamics. And that's not just about behavior. I think in the world of living systems, and also in physics, there's a lot on the statistical side of understanding things, and not as much now on the dynamical side. These used to be closer together, especially in interesting branches like turbulence. But I think it's time, especially with the kind of dynamic time series that are coming from experiments now; there's a good opportunity to bring these pictures together. And that's also part of what I'll talk about. So how do you go from this kinematic space of postures to a principled space of dynamics? All right. Physics likes simple harmonic oscillators, or even not-so-simple harmonic oscillators if you include the full nonlinearity. So here's a pendulum, right? And you all probably know the equations of motion for that pendulum, maybe more recognizable in the small-angle approximation, but anyway, right? Those are the equations of motion. That's the system. It might be useful. 
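For completeness, the pendulum equations being pointed to here are presumably the standard ones, with $\ell$ the length and $g$ the gravitational acceleration:

```latex
\ddot{\theta} + \frac{g}{\ell}\sin\theta = 0,
\qquad\text{small angles:}\qquad
\ddot{\theta} + \frac{g}{\ell}\,\theta \approx 0 .
```

The full nonlinear version is what produces the richer phase-space structure discussed next, with oscillation near the origin and full rotations at higher energy.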
In fact, there are a number of current efforts to uncover equations of motion that describe whatever dynamics you're sampling. But one, that's really hard. And two, it may be that you write down the equations of motion and still don't know what's in them. It doesn't take very many nonlinearities, even in a low-dimensional dynamical system, before you can't tell me the real behavior of that system without doing some sophisticated analysis on it. So although that's an interesting approach, trying to write down those equations could fail in terms of really helping you understand the system. An alternative and equivalent view to those equations is to think about the trajectories of that system, not in the kinematic space analogous to postures, but in the full state space, the phase space of the oscillator, okay? If you're able to look at these geometric trajectories, and this is a picture strongly advocated by Poincare, the geometry of these trajectories contains similar information to what's in the equations. So here, for example, around the origin is simple harmonic motion. If you give enough energy to the system, it will cross over and flip all the way around. And maybe the right thing to do to understand the dynamics of whatever living system we're working on, in this case the dynamics of behavior, is to try to build approximate versions of that state space, that phase space. Okay. So while you look at this crazy complicated flow diagram, I'm going to motivate this a little bit. Excuse me; can you see if I draw like that? When I looked at the first two modes in the posture space of the worm and just plotted the sampled data, I got something that looked like that. And that's really interesting: whenever you find a circle in data, circles in data are good things. But there's something really important that's clearly missing. 
So what does it suggest about the dynamics of the worm? It suggests that the dynamics of the worm are approximately confined to this circle. And in fact, forward crawling, let's say that that's forward, would correspond to going around the circle one way, but backward crawling, we'll call it reverse, corresponds to going around the circle the other way. And what this means is that when you're watching the dynamics in this posture space, this kinematic space, you will see these abrupt transitions; in physics language, trajectories cross. So this is not a phase space. You're missing degrees of freedom that would tell you that you're actually on a reversal trajectory instead of a forward trajectory, which is obvious. So you might ask: where are those degrees of freedom? How do I find them? One thing you might think, and this is where my drawing is probably going to really crap out, is that maybe it's just in some other time series in the same kinematic space that I'm measuring. So, roughly speaking, suppose I didn't measure A3, and forward crawling is confined to this plane. It actually does turn out to be true that a lot of the time when you reverse, you are exciting this A3, which is a curvature mode, in positive and negative directions. So in fact that will, most of the time, lift the degeneracy between going forward and going backward. A trivial example, okay? But of course, that's by eye, all right? What you need is a technique to do this in much higher dimensional data than I just wrote down, and in a principled way. We need a state space. 
We need a way of including all the degrees of freedom so that we have what we think is a good approximation, or our best approximation, to an instantaneous definition of state. And that's what we don't have. It's the same thing as saying: I have these modes, A1 through A5, but I don't know whether the full dynamical system is a first-order dynamical system, A dot equal to F of A, or a second-order dynamical system, and so on. So you need to look for those other variables, and that's what this is about, okay? You have some underlying dynamical system, in this case represented by this dynamics, but this is not the dynamics that we actually see. What we actually see is these high-dimensional measurements, given by different snapshots in time. And what we do is very straightforward, and you can imagine why it makes sense: if you're looking for an extra derivative, what do you do? You add another frame, right? That's enough information to, in principle, give you the derivative information between time t and time t plus one. If it's a second derivative, you add a second frame, conceptually; that's the idea. The idea is to blow up your configuration space by including delays, okay? These k delays build a higher-dimensional representation, and you keep going until you think you've actually added enough lags. Within that new high-dimensional space, you can then look for a lower-dimensional representation, which turns out to be pretty useful in many examples. And that lower-dimensional representation, now a representation in this space with lags, will include the kind of derivative information that you thought you might be missing, okay? 
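The construction itself is just stacking lagged copies of the measured time series into one higher-dimensional state vector. A minimal sketch (the function name and the random example data are mine, not from the talk):

```python
# Minimal delay-embedding construction: stack K lagged copies of a
# multivariate time series into one higher-dimensional state vector.
import numpy as np

def delay_embed(X, K):
    """Given X of shape (T, d), return the (T-K+1, K*d) delay matrix whose
    row t is the concatenation [X[t], X[t+1], ..., X[t+K-1]]."""
    T, d = X.shape
    return np.hstack([X[k : T - K + 1 + k] for k in range(K)])

# Example: a 4-dimensional time series (like the worm's posture modes)
# embedded with K = 10 lags gives a 40-dimensional state, as in the talk.
rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 4))
H = delay_embed(X, 10)
```

Dimensionality reduction within this lag space (by SVD, as discussed later) then gives the reduced dynamical coordinates.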
Historically, echoes of this are sitting in the delay-embedding theorems, the delay-embedding approach of Takens and so on, but those historical ideas weren't that easy to apply to the kind of data that we have today. They weren't multi-dimensional, for the most part, and they weren't able to handle noise robustly, okay? But now is the time to think about that, and they're interesting. If you have a deterministic dynamical system, there's lots of theory behind what this actually does. Another way to think about it is that by adding these k lags, what we're essentially doing is making a time-scale separation. All the dynamics at frequencies higher than that scale are shunted into your definition of state, and then those state variables evolve on a longer time scale; you're looking for that principled separation, okay? So that's all fine, but the way you build this higher-dimensional embedding is determined by two parameters: how many lags you choose, and then, once you have that number of lags, how you do the dimensionality reduction. I should say one other point which makes this, I think, an interpretable, nice approach: when you go to really large k, in fact in the infinite-k limit, you will recover the power spectrum, okay? Which is usually not very useful, actually, for the kinds of systems that we study. In the limit of very short k, at least in deterministic systems, you will recover derivatives, right? And you can imagine wanting the flexibility to interpolate between those two limits. So the idea is that we build a measure of prediction. I'm not going to go into the details, I can talk about them offline, but I can give you a sense: we choose these parameters, k and m, the number of lags and then the dimensionality within the space of lags, by asking how we can build our best local predictor. 
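As a rough illustration of the idea, and not the actual criterion used in this work, one can score each candidate number of lags K by how well a state's nearest neighbor in the embedded space predicts that state's future, and look for where the error stops improving. Everything below (scalar series, nearest-neighbor predictor, exclusion window) is a simplifying assumption:

```python
# Simplified stand-in for a "best local predictor" criterion for choosing K:
# embed the series with K lags, then ask how well a state's nearest neighbor
# (excluding itself and close-in-time points) predicts its future.
import numpy as np

def embed(x, K):
    """Scalar series -> (len(x)-K+1, K) delay matrix."""
    n = len(x) - K + 1
    return np.stack([x[k : n + k] for k in range(K)], axis=1)

def nn_prediction_error(x, K, horizon=1):
    H = embed(x, K)
    n = len(H) - horizon
    states = H[:n]
    future = x[K - 1 + horizon : K - 1 + horizon + n]
    errs = []
    for i in range(0, n, 10):                      # subsample references
        d = np.linalg.norm(states - states[i], axis=1)
        d[max(0, i - 5) : i + 6] = np.inf          # exclude temporal neighbors
        j = int(np.argmin(d))
        errs.append((future[j] - future[i]) ** 2)
    return float(np.mean(errs))

# Noisy oscillator as toy data: one lag cannot distinguish the rising and
# falling branches of the cycle, so adding lags should help prediction.
t = np.arange(3000) * 0.1
x = np.sin(t) + 0.1 * np.random.default_rng(3).standard_normal(len(t))
errors = {K: nn_prediction_error(x, K) for K in (1, 5, 10)}
```

In the actual analysis the series is multivariate, the predictor is local in the reduced space, and both k and the reduced dimension m are chosen together.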
And this is what it looks like for the worm. We start with this configuration space. Think of a four-dimensional time series, but now we're expanding it by adding the different lags of that time series. In this case, I think the sampling rate is 16 Hertz. And we're looking at local predictions: we want to include enough of these lags that we asymptote here, before we start losing predictive power. That's about 10 or 11 in this case. It corresponds to about half a wave of the undulatory motion of the worm. This isn't super sensitive, of course, to what you choose for this k, but it does give you a good sense of where you should be. And then within that space of lags, there's a very easy-to-see dimensionality reduction. So again, we're looking for a smaller number of dimensions within that high-dimensional space. Remember, once you get to k equals 10, for example, you're really sitting in a 40-dimensional dynamical space instead of a four-dimensional configuration space. But that 40-dimensional dynamical space is actually well described by six modes. And those six are interpretable for the worm. This is what they look like. These modes are short snippets of behavior, little motifs. And they come in approximately conjugate pairs; there are just about three conjugate pairs, one of them corresponding to forward crawling. Now we've lifted the circle: we have a distinct behavior for reversal crawling, a distinct plane that's here. And then we have a third cycle for turning. These numbers, of course, whether you get six or seven or five, will depend... In this foraging experiment, it doesn't depend on the worm you use, but it could depend on the experimental conditions and so on. So the exact numbers don't matter as much as the technique. 
And in this case, ultimately, the interpretability of these kinds of cyclic trajectories. So that's nice. It lifts the degeneracy, gives you a sense of the behaviors, and maybe you would expect this; but it's also nice that these are the canonical, broad categories of foraging behavior people knew about. But there are some interesting observations even at the level of this space. One of them is that each one of these behaviors isn't just a cycle. One of the straw-man models for how you think about dynamics might have been that a behavior, as I was discussing with some people at the break, is, for example, a limit cycle. That would be a very stereotyped behavior: stable, you'd fall back onto it. But these aren't limit cycles. And in fact, the variability of these trajectories, we think, holds insight into the control system that the worm is actually using. Now, you could say maybe the limit cycle is too much of a straw man, because if it's a stable limit cycle, how do you ever get off it to move yourself around? Okay, but it's a counterpoint. Okay, to give you just a little bit more insight into what these different planes in this phase space mean. So those are the planes. Here's an example, like what I showed you with the kinematic space. You're following the worm with an infrared laser and locally heating up its head. At high heat or high intensity it's an aversive stimulus, and it causes an escape response where the worm goes backwards, then turns around, does this omega turn, then goes off in a different direction. If you just look, and this is very empirical, at what's going on in the phase space and measure the amplitudes of these different variables, a reverse amplitude, a turning amplitude, and a forward amplitude, then they're kind of stable here. 
This is where you fall onto mostly the reversal plane, with the other planes suppressed; then later you fall onto the turning plane. This is just to give you an example of what it means to move around. I can't plot a six-dimensional space, but this gives you a sense of what it means as your behavioral dynamics move you around this six-dimensional phase space. Yes, that's a great question. There are four kinematic dimensions, but you don't know what the missing variables are; they could be derivatives. And that's the reason why the state space, the phase space, the dynamical space, can be a different dimension, a higher dimension, than the kinematic space, right? And you're capturing behavior, yeah. And how do you reduce the dimensionality within that space? Oh, that you can just do by SVD; we do do it by SVD, to be precise. Once you have this 40-dimensional space, you do SVD in that 40-dimensional space, and then you organize those reduced dimensions by their eigenvalues, by their singular values. What's that? So like this, like I was just showing. Oh, sorry, I skipped over what I was actually showing you. Just color code these, now there are six dimensions, color code them by something we all understand: how fast is the worm moving, its centroid speed? So that's centroid velocity; positive is red, so that's forward crawling. Blue is negative, so that's reversal. And also color code the trajectories by a turning rate; that's why we can interpret that as turns. So you're going to the centroid coordinates just to build some intuition for what these variables mean. Good point. I should also mention: if you take this technique out of the context of behavior and apply it, for example, to Aravi's neurons, then, well, he didn't mention some of the technical details. I think they're doing it, but I certainly know Andy Leifer and Zimmer do it. 
There was sort of an agreement that came into the field that it's not the activity levels of the calcium traces that matter, but the derivatives. So there's some derivative filtering that's done. You could ask why; who invented that? It turns out that it gives you a good picture of the worm's neural dynamics. And if you apply this kind of technique to that data set, you will discover that you need derivative filters, which I think is confirming. Okay, that was a side point. So you construct the phase space, like for the oscillator, but the question is: what does that really tell you about how the worm moves? One hint to the interesting dynamics happening in this six-dimensional phase space, which unfortunately I can't plot for you, is the fact that the variability in this space seems to be large, in the following sense. First of all, in the empirical sense, there wasn't a single cycle, even for forward crawling; there's a whole family of cycles. But in another sense: if you look at any two points in that phase space, in the classical way of thinking about dynamical systems, and you ask how nearby trajectories diverge in time, then that's this delta. And that delta, at least at early times, has an approximately exponential form of divergence. The right comparison may be, for example, to a noisy system, where you would expect something more like a power law. The difference between an exponential and a power law may be hard to see here. But it appears from this relatively simple measurement that you can do in a phase space that the variability, as measured by the local divergence of trajectories, is actually large. What's interesting is that we can come up with a model for what produces that variability. 
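The divergence measurement itself can be sketched in the style of Rosenstein-type largest-Lyapunov estimates. Here it runs on the Lorenz system as a stand-in for the worm's reconstructed phase space; the trajectory length, exclusion window, horizon, and fit range are all illustrative choices:

```python
# Nearby-trajectory divergence: for reference points on a long trajectory,
# find the nearest (non-adjacent-in-time) neighbor and track how the
# separation delta grows. The early-time slope of <log delta> estimates
# the leading exponent.
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)           # classical RK4 step
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Long trajectory after discarding a transient.
s = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    s = lorenz_step(s)
traj = np.empty((20000, 3))
for t in range(20000):
    traj[t] = s
    s = lorenz_step(s)

horizon = 100
logs = np.zeros(horizon)
counts = 0
for i in range(0, 15000, 150):
    d = np.linalg.norm(traj[:15000] - traj[i], axis=1)
    d[max(0, i - 50) : i + 51] = np.inf            # exclude temporal neighbors
    j = int(np.argmin(d))
    sep = np.linalg.norm(traj[i : i + horizon] - traj[j : j + horizon], axis=1)
    logs += np.log(sep + 1e-12)
    counts += 1
mean_log_delta = logs / counts

tvals = np.arange(horizon) * 0.01
slope = np.polyfit(tvals[:50], mean_log_delta[:50], 1)[0]
```

For a noisy, non-chaotic system the same plot would bend toward diffusive (power-law) growth rather than staying linear in log delta, which is the contrast being drawn above.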
So if you had a fully chaotic system, like the Lorenz system, a deterministic chaotic system, then one way to think about its behavior is not through the instantaneous dynamics but in terms of a family of what are known as unstable periodic orbits. There's a skeleton to the dynamics that you can think of as built from these orbits. You classify them by how long they are, by how many cycles they go around, and you can measure how unstable those orbits are by looking at the local trajectories, at the local Jacobians around those dynamics. And we see them. So it's a little small, but here it is in the worm: this is our spectrum of unstable periodic orbits. Every worm moves a little faster or a little slower, so you have to adjust for that minimum speed. And then you look for recurrences in your data, and you count how many you find at different periods, so you get a spectrum of these orbits. Those orbits, yes, you can derail off them. There's nothing on the plate that we're controlling; we haven't seeded the plate with bacteria or food of any kind, so this would be broadly characterized as exploratory, or foraging. That probably has something to do with the conditions, with what you see. For example, in what you would think of as the more stereotyped responses, like an escape response, this variability drops. It doesn't move away from chaos, it's still chaotic, but it squeezes the variability. I don't have a slide for that, but that's an observation. So the variability can be modulated, I guess that's the takeaway. Oh, they're short, yeah, absolutely. So this is the spectrum of these unstable periodic orbits. Most of them, if not all of them, are unstable. That's the sense in which there isn't a single cycle; there's a whole large family of these cycles. These are what are called the Floquet exponents of those unstable periodic orbits.
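The recurrence counting just described can be sketched as follows: for each candidate period p, count how often the trajectory returns within a small distance of itself after p steps; peaks in that count mark candidate (unstable) periodic orbits. This is a toy version on a noisy limit cycle rather than worm data, and the threshold eps is an arbitrary assumption.

```python
import numpy as np

def recurrence_spectrum(traj, max_period, eps):
    """Count near-recurrences |x(t+p) - x(t)| < eps for each period p."""
    counts = np.zeros(max_period + 1, dtype=int)
    for p in range(1, max_period + 1):
        d = np.linalg.norm(traj[p:] - traj[:-p], axis=1)
        counts[p] = int((d < eps).sum())
    return counts

# Toy data: a noisy limit cycle, so recurrences pile up at the true period.
t = np.arange(3000)
period = 60
traj = np.stack([np.cos(2 * np.pi * t / period),
                 np.sin(2 * np.pi * t / period)], axis=1)
traj += 0.02 * np.random.default_rng(1).standard_normal(traj.shape)

counts = recurrence_spectrum(traj, max_period=150, eps=0.1)
best = int(np.argmax(counts[30:])) + 30   # skip trivially short periods
print(best)   # recovers the underlying period, 60
```

In real data one would then refine each recurrence into a closed orbit and measure its stability from the local Jacobians, as described in the talk; this sketch only shows the counting step.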
And if those orbits are why you see this quasi-exponential variability, you should be able to recover this growth curve from the orbits. And that's this plot all the way at the bottom, where by including different orbits and their probabilities, we get a measure of this exponent. The gray line is the error in the exponent, the blue line is how many orbits we're encompassing, and we land right in the error bars. So these unstable orbits actually provide at least a trajectory-based understanding of why you see as much variability as you see. Also, just as a side note, those orbits are really interesting: they're longer-timescale things. We're doing this embedding with K of 10, which for our sampling rate is about half a wave, but these orbits correspond to real behaviors that can last seconds, even tens of seconds. So it's also an interesting way to try to extend your local state-space analysis to include longer timescales. OK. Something that, at least for me, was even more surprising. So we have this picture that we're building out piece by piece, exploring the deterministic origin of the variability that we see. The first part, the last slide, was an overall measure of variability in the local trajectories, how fast they expand. But in any n-dimensional phase space, you have n directions in which you can expand or contract. That means that for the six-dimensional phase space there's not just one way of expanding or contracting. There are six, and these are quantified, on average across the whole phase space, by something known as the Lyapunov spectrum. It's a measure of how a six-dimensional ball in your phase space will expand or contract along the different directions, averaged across all the places in the space.
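Estimating that spectrum from data is delicate, but the underlying idea is standard when the model is known: push an orthonormal frame through the local Jacobians and re-orthogonalize with QR at each step, accumulating the log stretching factors. A minimal sketch on the Henon map, not worm data; this dissipative system is chosen because its answer is easy to check.

```python
import numpy as np

def henon_lyapunov(n=100000, a=1.4, b=0.3):
    """Lyapunov spectrum of the Henon map via the standard QR method."""
    x, y = 0.1, 0.1
    Q = np.eye(2)
    log_r = np.zeros(2)
    for _ in range(n):
        # Jacobian of the map (x, y) -> (1 - a x^2 + y, b x) at the current point.
        J = np.array([[-2.0 * a * x, 1.0],
                      [b, 0.0]])
        x, y = 1.0 - a * x * x + y, b * x
        # Evolve the frame and re-orthogonalize; diag(R) holds the stretches.
        Q, R = np.linalg.qr(J @ Q)
        log_r += np.log(np.abs(np.diag(R)))
    return log_r / n

spectrum = henon_lyapunov()
print(spectrum)        # roughly [0.42, -1.62]
print(spectrum.sum())  # log(0.3): constant volume contraction
```

Because the Henon map contracts areas by the constant factor b = 0.3, the two exponents must sum to log 0.3, about -1.20, so the spectrum sits symmetrically about a negative midpoint: a toy version of a dissipative spectrum.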
And when you calculate those exponents, which are tricky to estimate from data, but we think we have that under control, you see a spectrum that, certainly from my perspective, was unexpected. It is almost time-symmetric, which in physics land you would say is almost Hamiltonian. If you had a Hamiltonian chaotic oscillator, say a double pendulum, then because of time-reversal symmetry in the Hamiltonian, the exponents come in pairs. That's the chaotic part of the double pendulum: you will have one chaotic, positive exponent and one negative exponent at exactly the same magnitude. It's an area-preserving dynamics, Liouville's theorem. And you will have two zero exponents, one of them corresponding to continuous dynamics, to time-translational symmetry of the dynamics. So the Lyapunov spectrum can be a way to understand the symmetries that are present in the system, and the symmetries might give you a strong clue, at the end of the day, about the underlying model. No one yet has been able to write down a model of even this relatively simple foraging behavior that seems right. So why is that? What is the class of dynamics that we're actually trying to model? That's kind of the whole reason for our work. And so the worm's spectrum is approximately time-reversal symmetric, but it's symmetric not about zero, but about this negative point here. So let me just take you through the points I just made. There is an exponent that's near zero. We don't know yet how to interpret the fact that it's not exactly at zero, but it would correspond to approximately continuous dynamics, which would also mean that whatever noise is in the system is not large. When you have real noise in your system, the dynamics are going to appear like a map, and you're not going to have this time-translation symmetry. So it's approximately zero. There are two chaotic directions.
I wish I could tell you what that means for the nervous system; that's for future work. So there are two chaotic directions. And the whole thing, the volume expansion, which is the sum of all these exponents, is negative. And the system is symmetric about this negative point. So what does that mean? One possibility: if you go back to this analogy with a Hamiltonian oscillator, and you add a little bit of dissipation to that oscillator, and then some drive to keep it going, this is the kind of spectrum that you would see. So it's starting to suggest, and in retrospect maybe it's not so surprising, that since the worm moves by bending its body, those bendings may be approximately elastic. There are models out there like that. I don't have a good model for what produces this spectrum, so that's a little unsatisfying. Yeah. Yes. I wish I could stand here confidently and tell you what we think the control is. But we think that spectrum is a really strong clue, and that's the next thing; we're really in the middle of that now. Absolutely, right? Yeah, absolutely. OK, so this is conjecture at this point, but we're getting closer to saying that it's real. So we think we might have a kind of quasi-Hamiltonian system overall, with a dissipation here and different drives, where the Hamiltonian accounts for the body wall, the body couplings, and so on. The picture to have in mind, then, is that you have all these trajectories in your space, and different bundles of orbits may correspond to different behaviors. And once you build that picture, then in the absence of external stimuli, this naturally chaotic dynamics might be how you explore space. At the end of the day, you might think of exploration as stochastic. But maybe what's really happening is that in the absence of external cues, you have an underlying dynamics that naturally carries you across, that naturally moves you from this red bundle to this yellow bundle.
You just sort of diffuse around the attractor. And then, of course, there's a term in here which I haven't explained, which is a real control signal. When a really important stimulus comes on, that control signal might push you off of where you are in this phase space and onto one of these other trajectories. And we do have evidence that that's what happens, that the control signal is really only active early. So for example, purple is reversal. Suppose this is an escape response, and you need to be on this purple trajectory because that's what's going to take you back. Then what we think happens is, suppose you're here: a control signal will just push you, and it will be active, and it'll push you over here until you're on this reversal trajectory. And then it will stop, kind of the way you might control a chaotic satellite or something like that. It's conjecture. All right. In the last part, right before I end: all of that was essentially making a really simple point. Instead of taking your data at face value in whatever format it comes, look for a rich enough space so that you have an approximate state space. I think that's a general point that will be valid in multiple data sets. And once you have it, then maybe there are really strong clues, at least in the relatively short-time dynamics, about what control is. In this case, the control that we're thinking about is shorter-time control, which is probably implemented by muscles and the neurons controlling the muscles and so on, right? Kind of a short-time dynamical-systems picture of the state. But you don't have to stop there. So for example, you could take the phase space that you've reconstructed, and actually you can even use this process to reconstruct a state space in the first place, and start discretizing it. And of course, this can be done in a way that doesn't pay attention to the underlying dynamics.
That is, we've binned our data; people bin data all the time. But there's actually a principled way to partition a dynamical system. In general you should worry about taking a continuous thing and discretizing it; it can have important consequences. But at least in certain limits, there are theorems in dynamical systems that allow for a partition that is discrete and finite, called a generating partition, that still captures the full dynamics of your system. And that's kind of nice, because some things are much easier to do in discrete systems. It might also be a way for us to control noise. So you can discretize this phase space in a principled way. In particular, because it's a continuous dynamical system, at least in the approximation that it's Markovian, or sorry, that it's stationary, that partition can be approximately Markovian, so that the full dynamics is simply given by this stochastic process. So this is the state, the six-dimensional vector describing where you are. This is the dynamics, and that dynamics is then given by something known as a stochastic matrix, which you fill in from your partition of your data. And that opens up all kinds of interesting connections. Once you have this partition, you've turned your continuous dynamical system into a symbol sequence. In dynamical systems land, you could have gone to the continuous system and measured its entropy; the entropy of a chaotic dynamical system corresponds to the sum of the positive Lyapunov exponents, at least in most instantiations. But we're much more familiar with the Shannon entropy, the entropy of that symbol sequence. And they're the same thing if you have the right partition. And once you have the entropy, that becomes, I think, an interesting general measure of variability in the dynamics. Then, of course, you can go to all kinds of different mutants and ask what's really controlling the variability.
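Once you have a partition, the bookkeeping is straightforward: map the trajectory to a symbol sequence, fill in the stochastic matrix by counting transitions, and then both the Shannon entropy rate and the eigenvalue timescales fall out. A minimal sketch with a made-up three-state chain standing in for a real partition; the partition itself, the hard part, is assumed here.

```python
import numpy as np

def transition_matrix(symbols, n_states):
    """Row-stochastic matrix P[i, j] = Prob(next = j | current = i),
    filled in by counting transitions in the symbol sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def stationary(P):
    """Invariant density: the left eigenvector of P with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def entropy_rate(P):
    """Shannon entropy rate (nats per symbol) of the Markov chain."""
    pi = stationary(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi[:, None] * P * logP)

# Generate a long symbol sequence from a known chain, then re-estimate it.
P_true = np.array([[0.8, 0.2, 0.0],
                   [0.1, 0.6, 0.3],
                   [0.2, 0.0, 0.8]])
rng = np.random.default_rng(2)
s = [0]
for _ in range(50000):
    s.append(int(rng.choice(3, p=P_true[s[-1]])))

P_hat = transition_matrix(s, 3)
h = entropy_rate(P_hat)
print(h)   # close to entropy_rate(P_true)

# Subleading eigenvalues of the stochastic matrix imply relaxation
# timescales -1/log|lambda|, ordering the dynamics from slow to fast.
vals = np.sort(np.abs(np.linalg.eigvals(P_hat)))[::-1]
timescales = -1.0 / np.log(vals[1:])
print(timescales)
```

With a generating partition, this entropy rate would match the sum of the positive Lyapunov exponents; here it is just the entropy of the toy chain, illustrating the mechanics.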
But you have a measure of variability in the entropy that we think is quite general. So that's useful. OK, one last point, just to finish; this is work that we're expanding on now. And that is, once the dynamics is framed in this stochastic-matrix, Markov approximation, the dynamics are easy to order in terms of their timescales. This is what the dynamics look like in general: just raising the stochastic matrix to the k-th power in the Markov approximation. Because this is a stochastic matrix, among the eigenvalues of this matrix there's always a unit eigenvalue that corresponds to stationarity. If it were an ergodic deterministic dynamical system, the eigenvector corresponding to that unit eigenvalue would be something called the invariant density. Maybe that isn't as interesting to you, but this orders the dynamics so that you can look for slow dynamics. That is, you encode the short-time nonlinear dynamics in the partitions, which come in principle from the phase-space reconstruction, and now you have a way of looking for slower-time dynamics controlled, for example, by neuromodulators like dopamine and so on. And I think that is also a promising approach. All right. So I guess three themes. Representation: beyond the kinematics, you should think about really reconstructing a state space; there's probably a lot of predictive information in that space. Dynamics: exploring what we can learn about the geometry of the dynamical systems that we find; we think this gives strong evidence about the control system. I'm looking for ways of thinking about nonlinear control, which I don't know well, so if you have a book, or advice, please share. But we think this is a strong hint for how to think about control, which is really the problem where we want to be.
And then, either in these Lyapunov exponents on the dynamical side or the entropies on the statistical side, the principle underlying all of this is the idea that variability can be a very important signal, a very important clue into how the system is actually functioning. And by quantifying that variability, understanding its origin, and perturbing it, maybe we learn, again, a lot about the control. And just the last point, which is more speculative. One of the interesting things about thinking about behavior is that if you're a human, and especially a cognitive scientist, you think of humans as doing all these complex things, while animals do something not as complicated. But those are anecdotes that are mostly uninformed. We haven't had a strong, quantitative understanding of the behavior of animals. So in building these behavioral representations, in quantitatively interrogating animals under increasingly complex situations and with increasingly complex brains, maybe we find, in much the same way that when we started looking at the stars and the motions of the planets our unique, privileged position in the universe disappeared, that our unique, privileged location in the space of complex behaviors is much more of a continuum than we thought it was. I'll leave you with that, and thank you.