So I guess that's the only advantage of being the last speaker. So, oh, that's the wrong slide. I was seeing something else on my screen, right? So my lab primarily looks at development. Until this workshop, I never thought of development as navigation and locomotion, but there is a hell of a lot of navigation and locomotion during development, which I'm not going to talk about. The cells have to go to the right place and adopt the right fate. And just to give you some feel for what we do, we take human embryonic stem cells, for example, and try to build things, not by trying to organize tissue, but just by touching them in the right place. And here are our attempts at building a human eye from human embryonic stem cells. And you see on the left side, it fails. On the right side, you get a beautiful retina, rod cells come up, things wire up right. So the question is, how do we control these biological systems so that we get what we want them to do? And that's what I'm interested in. So in biological systems, there is some input. There's an animal or a cell. You get some state decisions, like in development you get fate decisions, and then you get movement. And what I want to ask is, can I just forget about inputs? Go straight into the box, poke around, and make the damn thing do what I want it to do. So that's sort of the goal, basically. To ask the question differently: there are amazing amounts of beautiful biology looking at what things are necessary for something to happen, and we want to ask, what are the sufficient things for something to happen? And so basically, our goal has been to convert these systems into video games and make them do exactly what we want them to do. The problem is that inside this box is a nightmare, and it's got some horrible network and lots of nodes. And to be able to control the system, you have to figure out which nodes to poke and what the dynamics of the poking should be to make the animal do what you want it to do. 
We'd love to do it in human embryonic stem cells, and we're working hard on it, and we've been using C. elegans primarily as a test bed to test computational ideas. So it was sort of a side project that took on a life of its own, but I'm gonna tell you about the side project. Ravi introduced C. elegans really nicely. This is C. elegans there. In green is the nervous system. I'm gonna play a video. I hope it plays. Some of the movies are not playing, but you can see these little animals crawling to food, okay? So they're all sort of moving around. This white patch here is bacteria, and the animals are moving around and eventually finding the food. The circuit, as Ravi pointed out, has a set of sensory neurons, which signal to interneurons. That's the poor man's brain. And then there are motor neurons to get the animal to do what you want it to do. Roughly 100 sensory neurons, 100 interneurons, 100 motor neurons. So, ah, there is the movie of the brain of the animal, if you like. The question is, can I just poke around with the internal wiring of this animal to make it think there is food and walk towards food, right? So here are animals moving towards food. I don't want any food there. I just want to play around with the brain of the animal, make it think there is food and go. And we've actually done it, and I'll show you a movie. Here — and I'll show you how we do it as we go along with the talk — we're playing around with neural activity inside the animal, with precisely the right dynamics in the right neurons. The animal is going to think that there is food in this direction, and since we are doing it all with light, we can flip it and make optical illusions anywhere you want. So here you go, I love watching this movie. It actually does it better than odors or bacteria, the way we do it. See, we flipped it. It's like, oh my God, where is it? It'll look for a while. No food anywhere. Just, just, just playing around. And then it'll start going again. 
And I want you to notice something here. It's crawling on this agar plate, completely free, no inertia, an over-damped system. But look at this, I'm gonna flip it again, and the thing will keep running. So there is this inertia in the system. At some point it'll go, oh shit, and then come back. It'll happen there, there. So then it'll start again. And the behavior is exceptionally robust. Every animal does exactly what we want it to do. No variability in the system. Quite unlike this attempt at building eyes in a dish, where every system has its own idiosyncrasies, 50% of the time it fails automatically, and then you don't know what to do with such a system. So the question is, how do you get it to be robust like this? I will, we'll come, we'll come. This is like, just so that you pay attention for the last talk. Oh no, you missed a slide, you were on your phone. I can come back to it. Sorry, I'm just sort of making fun of you. Okay, so I think there are horrendous challenges here, and we don't even know where to start. And so I'm just gonna tell you how we think about the problem and why we're in serious trouble. Here is some horrible network, right? Let's say I wanna control me and make me scratch my nose. I have some horrible set of neurons in there. I want to be able to poke the right neurons with the right dynamics, right? And as I said, I don't know which neurons to poke precisely and which dynamics to impose so that I control the system like I want to. So let's take this brute force approach that everybody in our community is taking, which is: let's just measure every bloody thing you can measure, measure every neural activity, every connectivity, every neurotransmitter, and let's just sort of think through how you would even look at this data. So record everything, measure everything, pretend you have all of the data from some number of replicates. And from that, you want to be able to do the kind of experiment that I just showed you. 
So the question is, even in principle, if you had all this data, what are the challenges, right? And I think in some of the talks earlier in the session, where we had all these search trajectories, the same sorts of high-dimensional problems come in, and the problems are fun but exceptionally difficult. So let's say there are 10 to the 7 neurons, and I have all the data from all of them. Let's say that I have measured 10 features per neuron, okay? So I've measured spike delays, spike rate, some connectivity, some things, let's say 10 features. So I have 10 times 10 to the 7 simultaneous measurements from the system. That's 10 to the 8 measurements. Every one of these measurements you can think of as an axis in some high-dimensional space. So if I have 10,000 genes, I can think of every gene as one axis in the space, right? So in this case, neural activity, 10 to the 8 axes. And the data lives in this 10-to-the-8-dimensional space, which is hard even for string theorists to comprehend, I think, right? These are very large spaces and they're very non-intuitive. So the dimensionality of the data space is 10 to the 8. We live in three dimensions. This is one followed by eight zeros. The volume of the space goes like the exponential of 10 to the 8. If you're optimistic, you've performed this horrible experiment where you've measured everything from one person maybe 10 times. So the data density you have is 10 divided by the exponential of 10 to the 8, which is basically zero. So the problem is, if you measure everything, you have no data. You can never have enough data. I don't care how much time and money you have, there is no way you're gonna fill up the space, in which case conventional statistics breaks down, and as a result we've been very strongly against this whole-brain recording. I don't understand how to analyze the data as a matter of principle. So that's challenge one. 
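The back-of-the-envelope numbers above fit in a few lines. This is just the talk's arithmetic, with the "volume" of the space taken as e raised to the number of dimensions, and the counts (10^7 neurons, 10 features, 10 replicates) copied from the argument:

```python
import math

n_neurons = 10 ** 7          # assumed neuron count from the talk
features = 10                # spike rate, delays, connectivity, ...
dims = n_neurons * features  # 10**8 measurement axes

replicates = 10              # optimistic: measure everything 10 times

# Volume of the space grows like exp(dims), so the data density is
# replicates / exp(dims).  That underflows any float, so work in logs:
log_density = math.log(replicates) - dims

print(dims)          # 100000000
print(log_density)   # about -10**8: density is e**(-10**8), essentially zero
```

The point of working in logs is that `math.exp(dims)` overflows immediately; the log of the density makes the "basically zero" claim quantitative.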
The challenge actually gets even worse than this, but before that: the only way then to increase data density is to throw out things that you've measured. If you measure in 10,000 dimensions but then squish it down to two dimensions, you have enough data to do some statistics. So then the question is, what do you throw out? And so dimensionality reduction is important, but that brings us to challenge two, which is also exceptionally hard. Let's say I have D dimensions and I do some PCA. How do I do PCA? I take two data points and measure the distance between them. So I have this 10-to-the-8-dimensional space and I'm measuring distances between two points in the space, in D dimensions. Let's say I wanna control the damn animal, and instead of 10 to the 8 neurons, only about 15 neurons matter. The space in which the distances should be measured is this 15-dimensional space and not the 10-to-the-8-dimensional space, but you don't know what these 15 dimensions are. So let's just say there is a relevant signal in some number of dimensions D1, which is much less than D. So the relevant distance is the distance in the space D1 and not the distance in the space D. But if you measure the latter to do your PCA, which is the starting point of all analysis, linear or non-linear, and look at its correlation with the relevant distance, the correlation goes like the square root of D1 over D, which is easy to show. So essentially, if you don't know what you're doing and the signal resides in some small subspace, you don't even know how to do dimensionality reduction, one would argue. And PCA has to get it wrong, just by this simple argument. You're doing the Pythagorean theorem: you sum a whole bunch of squares and take the square root. If most of the squares are useless, then you're in trouble. 
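That square root of D1 over D claim can be checked numerically, under an assumption added here for illustration: coordinates drawn as independent unit Gaussians. Comparing squared distances in all D dimensions against squared distances in the D1 relevant ones:

```python
import math
import random

random.seed(0)
D, D1, n_pairs = 100, 4, 2000   # total dims, relevant subspace, sample pairs

full, rel = [], []
for _ in range(n_pairs):
    # per-coordinate squared differences between two random points
    sq = [(random.gauss(0, 1) - random.gauss(0, 1)) ** 2 for _ in range(D)]
    rel.append(sum(sq[:D1]))    # squared distance in the relevant subspace
    full.append(sum(sq))        # squared distance in all D dimensions

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

r = pearson(full, rel)
print(r)   # should land near sqrt(D1 / D) = 0.2
```

The full squared distance is the relevant piece plus D minus D1 irrelevant squares, which act as noise, so the correlation is diluted exactly as the talk says.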
So the problem with the brute force approach of measuring everything and trying to find these control nodes, to me, sounds horribly depressing somehow; basically, I don't know how to do it. And we've been trying hard to do it. In the case of C. elegans, it's a joke, right? 300 neurons, how much can it do? But then you go to human development and looking at stem cells, where there are 2,000 transcription factors you have to play around with. And there is no way a screen will get you anywhere if three of the 2,000 matter. 2,000 choose three is a very large number. So I'm gonna present for you one approach, inspired by very nice developments in math. And as I said, most of you know it, but the talk is for the students. How do we find these key neurons? And how can we do it faster somehow? The title was how to do it fast, but actually I don't see any option besides doing it fast. So the thought was the following. The data density is zero; it's 10 divided by the exponential of 10 to the 8 in the last argument. You have to throw things out anyway. If you're throwing things out after measuring them, why not measure less to start with? Why should you measure everything, if you're gonna throw things out anyway? And you don't know what to throw out either, so let's go to the other extreme and measure less. So the question is, how do I measure less? So let me give you a toy problem. Let me start with a simple toy problem. I tell you, 3x plus y equals 17, solve for x and y. You can't; you need as many equations as variables. So now I give you six equations and 64 variables, and say, solve it for me. How would you do it? You would say it's not possible, right? Because I need 64 equations for 64 variables. So I'll give you a high school puzzle. Yeah? Mic, mic, mic, mic, mic, mic. I hate mics. This thing doesn't pin in either. Sorry. So let me give you a high school puzzle while I pin this in. I have 64 coins. Damn. I'm refusing to admit that I'm getting old; that's the problem. I don't want to give in to bifocals yet. So I have 64 coins. 
This is a high school puzzle that you've done. 64 coins, one of the coins is heavy. Find me the heavy coin with the least number of weighings. How would you do it? Take the 64, divide it into two groups of 32. Take the heavier pile, divide it into two groups of 16, and so on. In six weighings, you can find the heavy coin. So suddenly, with 64 variables, you've found the answer with six measurements. So that's sort of the crux of the idea. And there is some very nice math behind it, which, again, all the experts in the room know. And that's the point: I know how to solve 64 variables with six equations, no problem, assuming that only a few of the coins are heavy. If all the coins have different weights, I'm in trouble. Now here comes the really, really pretty stuff. Again, the experts know this: from the genius of our times, Terry Tao, who did all of this after getting a Fields Medal. A much harder problem than the high school one: 64 coins, some number of them have a different weight. Some are heavier, some are lighter. I don't know how many. The only thing I know is that the number of coins with a different weight is much less than 64. Find me the coins with a different weight. A smart high school student cannot do this. In fact, one might have thought it's not possible, for 200 years or so, I think, and Massimo can correct me. Turns out you can still do it. And you can do it in a number of measurements proportional to log 64. So with different coin weights, I can still do it. And I'll try to explain it on the board, because the ideas are simple. And none of my research matters; this is all much better than anything else I can talk about. So I'll tell you in a very cartoony way why this works. But let's just keep going with the experiments. So basically, if I have a system with n variables, what is the lesson I've learned from my high school puzzle? If only a few things matter, it's stupid to measure one coin at a time. You want to measure piles of coins. 
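The halving scheme for the single heavy coin is just a binary search with a balance scale. A minimal sketch — the coin weights and the hidden index here are invented for illustration:

```python
def find_heavy(coins):
    """Locate the single heavy coin by repeatedly weighing two halves."""
    lo, hi = 0, len(coins)          # search window [lo, hi)
    weighings = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        weighings += 1              # one use of the balance scale
        if sum(coins[lo:mid]) > sum(coins[mid:hi]):
            hi = mid                # heavy coin is in the left pile
        else:
            lo = mid                # heavy coin is in the right pile
    return lo, weighings

coins = [1] * 64
coins[37] = 2                       # one heavy coin hidden at index 37
print(find_heavy(coins))            # (37, 6): found in log2(64) = 6 weighings
```

Each weighing halves the window, so 64 candidates collapse to one in log2(64) = 6 steps, which is the "64 variables, six measurements" trick.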
And in jargon, it's called measuring in a conjugate basis, one where the signal is not sparse, so any pile has a chance of having a heavy coin. Second, use this Tao genius thing with the L1 norm, something called compressed sensing, which I'll explain in a cartoon in a second, to find the coins. And so, just a bit of math with some cartoons. I told you that 3x plus y equals 17; solve it for me. There is no unique solution, because the solutions lie on a line. So with fewer equations than variables, your answers are on this line. And you want to find the answer without knowing enough. The only thing that you know is that a few things matter. So first, how do I pick an answer, given that there's a whole line of them? You could use the Pythagorean theorem, measure distances by drawing circles, and ask: what is the point on the solution line which is closest to the origin? I'm just going to pick that point. And so what you do is expand the circle till it touches the line. And so my answer is that one, inside here. But what you see is that neither x1 nor x2 is 0 here. So it's not a sparse solution. What you want is most things to be 0 and only a few non-zero entries. So part of the Candès-Tao trick — and some of it actually earlier, from the geosciences — was to say: don't use the Pythagorean theorem and measure distances by x1 squared plus x2 squared, like you've done in high school. Measure distances by taking the modulus. So if you try to plot mod x plus mod y equals a constant, instead of a circle, you get this tilted square. And now you want to find the solution closest to the origin based on this distance. So how do you do that? Expand the square till it touches the line. And you see immediately the line is touched at one of the sparse solutions. So you will find this answer. And I'm going to do this with C. elegans in a second. So we're going to use these ideas. Instead of coins, you're going to have neurons. And the L1 norm, with non-sparse measurements. 
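That cartoon can be played out numerically on the talk's own equation, 3x + y = 17: parametrize the solution line and pick the point closest to the origin under each norm. The grid scan below is just a crude stand-in for the expanding circle and diamond:

```python
# Every solution of 3x + y = 17 is (x, 17 - 3x); scan x on a fine grid
# and keep the solution with the smallest "distance" to the origin.

def closest(norm):
    xs = (i / 1000 for i in range(-10000, 10001))   # x in [-10, 10]
    return min(((x, 17 - 3 * x) for x in xs),
               key=lambda p: norm(p[0], p[1]))

l2 = closest(lambda x, y: x * x + y * y)      # Pythagoras: expanding circle
l1 = closest(lambda x, y: abs(x) + abs(y))    # expanding tilted square

print(l2)   # about (5.1, 1.7) -- both coordinates non-zero, not sparse
print(l1)   # about (5.667, 0) -- y pinned to (nearly) zero, sparse
```

The circle first touches the line at a point with both coordinates non-zero; the diamond's corners sit on the axes, so it first touches at a sparse point. That geometric difference is the whole L1 trick.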
Don't bother measuring one neuron at a time. Measure from multiples of them and try to find the heavy coins. So, OK. And the key assumption is that the number of neurons important for a behavior is much smaller than the total number of neurons, just like the number of heavy coins. And as a trial, we said, OK, let's try to find the neurons that control the speed of the animal, and let's try to do it as fast as possible. So I want to give you some introduction, particularly for the students, about some tools. Some of it was mentioned in the last talk. There are these genes that you can control with light. One of them you can control with blue light, and it sends in positive charge. The other you can control with green light, and it pushes out positive charge. So with blue and green light, you can control the neurons. That's part of what we did in the first experiment I showed you, where the animal tracks; that's how we control the neurons. And again, happy to give you tutorials afterwards; all of the molecular biology is exceptionally easy. You take this gene and attach another piece of DNA next to it. That piece of DNA is an address code. So once you know where to find the address code, you can take the address code, put your light-activated channel next to it, put the two genes together, and put it back into the gonads of this animal. All the babies will have this. So here I've just put orange color into one neuron. Very easy to do. So if I put these light-activated channels into the neuron of the animal that senses touch on the nose, then once I have this animal, I can shine blue light, it'll feel like it's touched on the nose, and it'll run back. So I can control simple things already. So back to coins. We took this gene that we can inhibit with green light, connected it to an orange protein so that we can visualize it, and started coloring our neurons, now known as coins, with orange color. And we're coloring them so that we get piles of coins. 
Remember, we don't want to do one neuron at a time; we do piles at a time. So here are pictures of some piles, with three-letter names that are random C. elegans neuron names unless you're an expert; you don't need to look at them. But the nice thing about this address book is that it'll always put the gene that you put in in the right places. So we have a whole bunch of animals. Different animals are expressing this gene in different sets of neurons, where when you shine green light, those neurons are going to be shut down. And shining green light is like taking a pile of coins, putting it on a weighing pan, and weighing it; I'll tell you in a moment what the weight is. So we have, I think, 29 such piles — a pile meaning a C. elegans strain with a particular set of neurons carrying this channel, where when you shine green light, you shut down only those neurons. So if you think about it as simultaneous equations, every row in this matrix is like 3x plus y, or x1 plus x2 plus x7, equals something; I don't have the equals-to part yet. So that's one pile. All the white boxes correspond to the neurons that are in that pile. The x-axis is the names of the neurons, the y-axis is the piles, 29 piles. So the first pile has that neuron, that neuron, that neuron, that neuron, that neuron, and that neuron. So I can get this matrix. And I have 29 rows. So 29 equations, 114 variables. And with 29 equations and 114 variables, I want to find which variables matter, assuming that only a few coins are heavy. So to measure the weight, what we do is take every one of these lines — here are a bunch of animals of one line — put them next to food, shine green light, and look at how they behave. So here is one. With green light, you can see how they move, not quite like wild type, right, something. And the next movie is a bit more dramatic as a contrast. You can do this for all 29 lines. Here is the next one. You see, they're not moving very much. So all these different mutants give us very interesting phenotypes. 
Some go in circles, some go into the lawn but don't stay there and come back out. But for our purposes, we're just looking at the speed of the animal. So the speed of the animal, or actually the distribution function of the speed of the animal, is equivalent to the weight of the pile. So you have a pile, which is a set of neurons in one animal, and the weight of the pile is extracted from this movie based on the distribution function of the speed. So here is the left-hand side of your equation. The right-hand side of your equation is the phenotype, which for us was the Kullback-Leibler divergence between the two speed distributions; a bit technical, maybe. But you can get a phenotype, so you can fill this column up. Remember, these are your x's; I'm calling them w now, sorry. And I want to find out which of these coins have serious weight, and which of them are the same weight as the rest, so I don't care about them. So I want to solve this equation. And the way I do that is to minimize this: the first part is just the chi-square error of the equation, just y minus Mw squared. That's just the error. And this part is a constraint that says that the solution has to be sparse. So here, you're drawing those rhombuses so that most of the numbers you get out for w are zero, and only a few of them are non-zero. If you set lambda to zero, it'll call all the neurons important and try to match everything. As you increase lambda, fewer and fewer neurons remain important. So lambda is a tuning parameter. The phenotype — the difference between the wild type and the mutant speed distributions — multiple measures actually work. You do this and ask which neurons are important. You get three neurons. And the point is, you can dial lambda, because lambda is a free parameter, and we pay for our sins by doing experiments; after all of this, you don't want to be wrong. So you dial lambda. 
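Here is a scaled-down sketch of that minimization — minimize ||y − Mw||² + λ||w||₁ — solved by coordinate descent with soft thresholding. The shape mirrors the talk (29 piles, 114 neurons, 3 heavy ones), but the matrix is an idealized random ±1 design and the weights are invented for illustration; these are not the real promoter lines or phenotypes:

```python
import random

def soft(x, t):
    """Soft-threshold: shrink x toward zero by t (the L1 proximal step)."""
    return x - t if x > t else x + t if x < -t else 0.0

def lasso(M, y, lam, sweeps=200):
    """Coordinate descent for min_w ||y - M w||^2 + lam * ||w||_1."""
    n, d = len(M), len(M[0])
    w = [0.0] * d
    r = y[:]                                   # residual y - M w
    z = [sum(M[i][j] ** 2 for i in range(n)) for j in range(d)]
    for _ in range(sweeps):
        for j in range(d):
            rho = sum(M[i][j] * r[i] for i in range(n)) + z[j] * w[j]
            new = soft(rho, lam / 2) / z[j]
            if new != w[j]:
                delta = new - w[j]
                for i in range(n):             # keep the residual up to date
                    r[i] -= M[i][j] * delta
                w[j] = new
    return w

random.seed(1)
n, d = 29, 114                                 # 29 piles, 114 neurons
M = [[random.choice((-1, 1)) for _ in range(d)] for _ in range(n)]
w_true = [0.0] * d
w_true[5], w_true[40], w_true[90] = 3.0, -2.0, 4.0   # three "heavy" coins
y = [sum(M[i][j] * w_true[j] for j in range(d)) for i in range(n)]

w = lasso(M, y, lam=1.0)
support = sorted(j for j, wj in enumerate(w) if abs(wj) > 1.0)
print(support)   # the planted heavy neurons should dominate
```

Dialing `lam` up drives more coordinates of `w` to exactly zero, which is the lambda sweep described next; with 29 measurements and only 3 heavy coins, the L1 penalty typically recovers the planted support.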
And you see that over a whole range of lambda, as you increase lambda, the solution becomes more and more sparse, but these three neurons stay. The dotted line is the chi-square error compared to the phenotype. So you see, beyond this value of lambda, the fit to your data is awful. So you don't want to be out here, where lambda is really large and no neurons come out as important. You want to be where the chi-square error still matches your phenotype and the solution is as sparse as possible. You get three neurons. So suddenly, with 29 piles of neurons, assuming that only a few things matter for speed, you can pull out three of them. And next, what you have to do is try to control these neurons. Any questions? Yes. Good question. Very good question. So you have to make an estimate of how sparse you expect your solution to be. And the estimate from Candès and Tao — there are multiple layers of answers, because it's a very good question. So let me give you the naive answer and a slightly more detailed answer. You have to estimate what your sparsity parameter is. So you have to know roughly how many neurons you expect. That's one. We don't know how many neurons there are. But the second part is that every one of these experiments takes months, and I have to convince somebody to make these lines. So that's the more obvious part. And third, not many of these promoters are annotated. Interneurons in C. elegans are less studied. Sensory neurons you can always poke at. But you have this damn circuit with 100 neurons in there, most of which don't have specific promoters. You can't address them individually. And there are only so many promoters. So we actually exhausted as many stable promoters as we could. If we had enough promoters, you see, I can find sparse objects in of order log n measurements. I can find epistasis, meaning pairs of objects, in of order two log n measurements, if I had enough lines. 
So suddenly, it becomes interesting in the context of RNAi screens and things, which I won't get into. So the next thing we have to do is check if our inference is correct. So I'm going to change gears here. So far, it was molecular biology and behavior and some compressed sensing. Now we've got three neurons, and who knows if it's right? And we picked speed. Now, all the other microscopes that we know of which measure calcium activity in freely moving animals put a cover slip on top. Once you put something on top, the thing is not moving freely. We want the animals exactly as they are on a dish with the bacteria, left alone, moving freely. So let me tell you what we had to go through to do this. We identified three neurons, and we want to go shoot individual ones of them to see if we got the correct ones. And I told you just now, I can't address them independently, so I have to do some optics, remember, to go shoot the neurons that I want and turn it, as I said, into a video game. So this is what we need to do. Completely unconstrained animal. Image the animal, transfer the image to a computer — and this is a very big deal, by the way — process the image, do all the hardware control to move the stages and everything else. And we also have little mirrors that we can adjust to shoot light. And repeat. And we want to do this for an hour; we want to do it as long as we want. And if you make a single mistake, you're in trouble, because you've lost tracking. And the best part is that we have to do all of this in four milliseconds; that's all the time we have. So you barely have time to expose the image and catch photons. And if you get the most sensitive camera on the market, it takes 10 milliseconds just to transfer the data from the camera to the computer, and in that time, we should have done all of these steps two and a half times over. So we had to sort of do stuff. 
And let me just show you, for fun, a movie where we don't track the animal. This is at the 63x magnification at which we're operating. The movie will play very quickly, so you need to focus. That's it. So if you don't track the animal, that's all you can do. So we built a microscope with field-programmable gate arrays and lots of stuff. The nice thing about all of this was that it was fun, because in four milliseconds, you don't have time to do multiplication. You can only do addition and subtraction. So all your processing has to be done with addition and subtraction. No rotation, no nothing; but we still wanted rotation stabilization, which we could do with hardware. And also, as I said, we wanna do it at low power so we can image for an hour. We wanna be able to really stabilize the animal. And so what we did was separate the stabilization and the imaging problems, and I'll just show you movies instead of boring you with the details. What we did was, in the head of the animal, find one neuron with this cell body and this process. And I'm gonna show you a movie in real time where we're stabilized. This is the image of the animal here. This is the image of the cell body of that neuron, in two copies, and if you catch me after the talk, I'll tell you why there are two copies. We're doing Z tracking without a cover slip, and we can do tracking in X, Y, Z with one micron accuracy. This thing here is one micron, the width of this object here. We can do rotation tracking with about 10 degrees accuracy, all with a four-millisecond feedback loop, and simultaneously shoot light wherever we want to. So here is a live movie. The animal is moving completely freely on a plate. We've stabilized rotation, so you see the nose is always pointing up no matter where the animal looks. Z is precise, everything is good, and we can start imaging neurons. So this animal is moving completely freely on a plate, but it's always pointed in the same direction. 
And then there are these beautiful things called liquid lenses. You can change the focal plane just by pumping liquid in and out. So you pump liquid in and out, change the focal plane, and you can do a Z scan with just one objective. So here is a movie with a Z scan. If it plays — see, you can see it look through the worm. Yeah, so it's all doable, and if anybody wants to do this — we've started using this also for our stem cell work. Where am I? Okay. And simultaneously, as I said, we can shine light wherever we want to. We can precisely target the light. You see only one neuron lighting up, because we have pointed our mirrors just right so that we hit that neuron. And hitting a neuron is a joke now, because the image is so stable on your camera; I just say hit that, and it'll hit it. There is no hurry to hit, because everything is stable. So with this, we can do any kind of time series experiments and hit the neurons that we want. I won't bore you with three-letter names, but these were the three neurons, and we can shine light on specific ones and start looking at speed changes. And we see that for every one of them, when we shine light specifically on that neuron, it changes speed, which tells you that this L1-norm inference, with 29 promoter lines covering 114 neurons, got the answer right. And we've done negative controls, sporadically looking at some of these 114 neurons one by one, to make sure there are no other neurons it should have found; it really gets you the answer. So the basic message is: you don't need to measure too much. Try to measure less. And if you think only a few things matter, then you're in business. And the other point is philosophical: if it's not true that only a few things matter — if many things matter — then maybe you should forget about the problem, because, I don't know, what do you do? You can't drug it, it's medically not relevant, you can't control it; controlling 17 neurons in some linear combination with the right weights is somewhat depressing. 
So I would almost say that if sparseness is the answer, it's worth looking at. And the other point is that once you have these control nodes, everything else above and below is complicated, but you know what to focus on. And then you can see how sensory information comes into these nodes and goes out. So again, we've done lots of calcium imaging, because we can stabilize so well. And both in terms of correlation with speed and change in speed on shining light, these three neurons stick out. All these others are control neurons, including some in the literature that were thought to be important but are not. And it turns out that this little circuit is interesting. There is one neuron that acts like a switch. When the neuron turns off, the animal comes to a standstill; turns on, and it moves. So this is a switch. There is one that acts like a dial. You can dial it up and down; as I said, you can tune the level of light or the frequency at which you blink it, and you can change the velocity. And another one is a rectifier; it works only for moving forward. So you have these three little neurons connected together. There is another neuron which controls backward motion but doesn't control speed. So if you want to go backward, you have to turn this neuron off and another neuron on, and control these two for speed. These two control speed whether you're going forward or backward; forward works much better. So the kinds of experiments we want to do eventually, which is what I'm coming to, are for speed control — that's why we got into all of this, besides trying the math out. I showed you this movie with the tracking. The way we did tracking was to control another set of neurons, most importantly AIY, with the right dynamics, okay? So there is one set that's controlling tracking of the gradient, there are some that are controlling speed, and presumably together they're controlling chemotaxis. 
And since we're doing everything with light, I don't have to make the signal clean. I can make it noisy. So then I can ask: does the worm know the central limit theorem? Does it slow down and average the signal before it goes ahead? So suddenly, with optics, all of these possibilities start being accessible. And to average and move slowly, it has to control these neurons. So then you start building up circuits, and then you can start asking how the sensory neurons feed into this part, if you're really serious about C. elegans. But for us, it's been much more about using C. elegans as a rapid system to see if our ideas work, and then moving back to — I have a schizophrenic lab — the human part. So in conclusion, I think these high-dimensional data problems are really fun and exceptionally difficult. And I'm hoping that with Massimo, we sort of start thinking about them. These are really, really difficult. And I think when people say, we're going to measure everything and somehow the answer is going to magically pop out — there is no argument against this back-of-the-envelope calculation saying that's not possible. The hard problem here is to know what subspace to look at to find your signature. If you don't know the subspace — I mean, machine learning, as you know, is mostly supervised learning. This is all unsupervised. So you have to find this subspace unsupervised, and that's an exceptionally hard problem. And making experimental predictions is a whole other deal; we can't even analyze the data, let alone make predictions. If I could say, do this, this, this, and you will get a robust eye in a dish — I'll show you the slide at the end, just for you — that would be great, right? The problem, for example, I'll tell you with the stem cell systems, is the following. People say, oh, stem cell systems are great because they recapitulate human development. So what would you do with a stem cell system? 
You try to build things, and the hope is that they self-organize and build as they want, and then you try to understand how it is built. How do you figure out how it's built? Traditionally, you break things and see when it breaks. But if the system breaks all by itself 50% of the time, you can't use any of these stem cell systems to understand disease, really. That's the problem. So if we knew how to build precisely and control these systems, I think it would have impact. But again, the problems are really hard, and it'll be fun to work with any number of you. And finally, I'll show you some movies. We're getting optogenetics working in development, so we can shine light and control signaling pathways in human embryonic stem cells. Here is one where we figured out that we can actually shine light: these four cells have a specific protein, and we can mock up locomotion, so we can sort of make them go over. And they start changing fate. So it's getting there. I mean, yeah, it's mostly frustrating, but it's getting there. With that, I'll take questions, and thank you again for your attention. And if you have any basic questions of any sort — none of this is difficult. The only difficult part here is the microscope building. Everything else you should be able to understand on a blackboard. So if you have any questions about anything, I'm very, very happy to give you a tutorial on a board. Thank you.