Hello, welcome to the last session of this wonderful symposium. My name is David Alonso, I'm a Drosophila geneticist, I want to understand how the brain of the fly works. And it's a real pleasure, really, to have you here today, all of you. I wish to first say thank you very much to Neon for sitting here, taking pictures, doing all his stuff, for having done this. I think it's a great occasion to celebrate how many of us have met, and it's such a pleasure to have somebody like Mike around the corridors talking to us, and just sharing his time with us. As somebody said at the beginning, science has to be fun, and this feels like such a great example of that. So the plan is that we have two sessions, and after that the floor will be open for comments, for those of you who wish to contribute personal notes or comments in the different ways in which you want to celebrate Mike. So, over to you. Tom Baden is going to tell us about the evolution of computation in the brain, insights from studying fish. So, Tom. Yeah, so thanks, it's been a wonderful day. Let's see what we can add to it. So we've heard a lot of things about Mike. One thing that we didn't hear is that he was not at all afraid to stick his neck out a bit and come up with ideas that at the time seemed a bit odd, but that maybe in time ended up being right. And I don't know if this is going to be one of those ideas, but I'm going to honor that tradition and shake things up a little bit myself by telling you that color vision isn't real. Or, slightly more precisely, I'm going to tell you that I think color vision is an epiphenomenon of the eye's evolutionary history, in the sense that, of course, color is there and we use it and we see it, but it is a byproduct of how the eye works and how it evolved. So why would I say that? So here is a raccoon. And the raccoon can tell us what we think eyes do when there's no color, because raccoons effectively only have one photoreceptor type.
And because of the principle of univariance, meaning you can't tell wavelength apart from intensity if you've only got one photoreceptor type, that guy is colorblind. Therefore, all of its vision is grayscale, and therefore all of its visual behaviors are driven by grayscale. But then, because animals are wonderful, some animals have more than one photoreceptor. For example, dogs, right? Pretty much all mammals have a second one. And now you have a choice. You can take those two photoreceptors and combine them, and arrive again at something like a raccoon. Or you can contrast the signals of the two, which gives you spectral information, which you can then turn into color vision, and that may or may not lead to specialized behaviors that depend on color. So this is the sort of textbook view of how color vision works. Okay, so that's two cones. And then we've got things like us; we've got three. So what does the third cone add? Well, the general textbook idea there is that color vision gets a bit nicer. We see some more colors; the color resolution has improved. And then we get other animals that have even more cones, and the color vision improves further. So the purpose, then, of adding extra cones is just to make color vision better. And sure, that's a nice thing to do. But that's a pretty heavy investment in photoreceptors if it's just to make color vision a little bit better, and I think that's not what it's for. So what I think is correct is that the presence of multiple cones enables color vision. But what we always do, and this is a typical thing that we shouldn't do, and we know we shouldn't do it, is translate that into the idea that therefore the purpose of having multiple cones is to enable color vision. I think that bit is wrong. I think the original purpose of having multiple cones is to enable vision, not color vision specifically, in the water, because of course all vision evolved in the water.
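The principle of univariance can be put in a couple of lines of arithmetic. This is a toy sketch with an invented Gaussian sensitivity curve, not data from any real photoreceptor: a single receptor's output is just intensity times sensitivity, so a dim on-peak light and a brighter off-peak light produce identical responses.

```python
import math

def sensitivity(wavelength_nm, peak_nm=550.0, width_nm=80.0):
    """Toy Gaussian spectral sensitivity of one photoreceptor type.
    The peak and width are hypothetical, for illustration only."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def response(intensity, wavelength_nm):
    """Univariance: the output depends only on the product of intensity
    and sensitivity, so wavelength and intensity are confounded."""
    return intensity * sensitivity(wavelength_nm)

# A dim light at the peak wavelength...
r1 = response(1.0, 550.0)
# ...is indistinguishable from a brighter light off-peak,
# if the off-peak intensity is scaled up to compensate.
off_peak = 600.0
r2 = response(1.0 / sensitivity(off_peak), off_peak)
assert abs(r1 - r2) < 1e-9  # identical responses: the cell cannot tell them apart
```

With two receptor types of different peaks, the ratio of the two responses breaks this confound, which is exactly the "contrast the signals" option described above.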
So if we take that idea, we can take our little combine-and-contrast connection here and feed it back into the visual system. This is just all of vision, which uses both combination and contrasting of photoreceptor signals, for all core behaviors. But because one of the features of the cones is that they are spectrally distinct, that inevitably generates spectrally opponent signals down here, and that then leads, if needed, to color vision. So when we then think about what the cones do, what I'm asking is that we move away from the idea that the main thing distinguishing the photoreceptors is that they respond to different wavelengths of light. Yes, they do, but that is not the main thing; that's not the main reason why we have them. We need to think of the cones as feature channels, and hopefully in the next few minutes I can convince you that this is a reasonable way of thinking about them. I will talk almost exclusively about zebrafish here. One reason is that we work on them in the lab, so that's obvious, but the other reason is that zebrafish are actually a really nice representative species, accessible today, that has an ancestral-like state. Ancestral-like in the sense that the visual system the zebrafish has is probably not that different from the visual system that existed way back around the Cambrian explosion, when the first vertebrates, the craniates, appeared. Some of their features are very similar to what we think those first animals had. For example, zebrafish have the four ancestral cone types, the full ancestral set. They are diurnal: they are active during the day, not during the night, which also makes them much easier to work with.
They live in shallow water, again like those early animals. They're small; actually, if you look at the fossil record of the first craniate species that had eyes, they are about the size of zebrafish, so it makes a lot of sense to look at a species in that range. And of course they have interesting behaviors that are pretty complicated, for which you need to use eyes in interesting ways. For example, they're predators: they eat stuff that moves. But they're also prey: they get eaten. And that was certainly also the case back then. So let's then think about what light does in the water. Light comes from the sun, through the atmosphere, hits the water surface, and light and water are not friends. Basically, water gets rid of light: it scatters light, you lose it; it absorbs light, you lose it. Meaning that even right under the surface, the spectrum of light available for vision is not the same as the spectrum that would have been available if vision had evolved above the water, which it didn't. So we wanted to understand, first of all, what the spectrum of light is in the zebrafish's natural habitat. It's a bit dim here, but it came out: a while ago we went to India, where they are endemic, and we measured it, and this rainbow curve, which is very faint here, is pretty much the light available where the fish live. What I've plotted here is the number of photons, so higher means brighter, against energy. Energy is the reciprocal of wavelength; wavelength is what we usually plot, but energy, I think, makes a lot more sense in this case. And then what I've plotted on top of it are the measured tuning functions of two of the photoreceptors in the zebrafish, work from Takeshi here, and hopefully you can appreciate that the red cone basically captures all of that light really quite nicely.
That's a brightness channel. The UV cone captures only a small little bit of that light, and it's very specific for UV; the red one is not very specific at all. So that's important. That means this guy gets more photons than that one. But the photons that guy gets are the better photons, the high-energy photons. High-energy photons are easy to pick up; low-energy photons are hard to pick up, because you run into thermal noise. Okay, so based on that, let's just put the dots here. The red cone sits somewhere here, and the UV cone sits somewhere there. We can turn this photon axis effectively into a speed axis, because the more photons you have, the less time you need to integrate in order to get a usable signal. So if you get lots of photons, you can afford to be quick; if you don't get many photons, you can't. Therefore this is slow and this is fast. And convergent evolution will do this to any photoreceptor. If you look at photoreceptors inside or outside of eyes, they will just evolve to be that way, depending on the wavelength, or rather on what type of photons they get. The energy axis we can turn into a gain axis. It makes a lot of sense if you think about it: these are the good photons, really easy to pick up, high energy, and therefore we can basically crank the gain all the way up to 11 and these guys are fine; the image doesn't go snowy, you don't get noise. If you do that for the red cone, you get a snowy image; it's bad. You can't crank up the gain for the red cones, because their photons have lower energy. So intrinsically, we have a fast, low-gain system and a slow, high-gain system. Okay, that's photoreceptors in isolation, but of course photoreceptors don't sit in isolation; they sit inside the eye, within circuits, and the circuits do useful things.
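The photons-to-speed argument is essentially Poisson counting statistics. A minimal sketch, with invented photon rates rather than measured ones: for Poisson arrivals at rate R, a count over time T has signal-to-noise ratio sqrt(R*T), so the integration time needed to hit a target SNR scales inversely with the photon rate.

```python
def integration_time(photon_rate_hz, target_snr=10.0):
    """For Poisson photon arrivals, SNR of a count over time T is
    sqrt(rate * T), so hitting a target SNR needs T = SNR^2 / rate."""
    return target_snr ** 2 / photon_rate_hz

# Hypothetical rates: the red cone catches far more photons than the UV cone.
t_red = integration_time(10_000.0)  # lots of photons: can afford to be quick
t_uv = integration_time(100.0)      # few photons: must integrate much longer

assert t_uv > t_red  # the photon-poor channel is necessarily the slow one
```

The 100-fold rate difference here is made up, but the scaling is the point: whatever the actual rates, the photon-poor channel must be the slow one, which is the convergent-evolution claim above.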
The first thing that happens is that the photoreceptors talk to each other, and they talk to the horizontal cells, and via the horizontal cells, to each other. And if we look in the live eye, we find that these circuits take those intrinsic differences and pull on them, pull them apart. For example, if you take out the horizontal cells and you look at your UV cone, the UV cone goes slower. But if you look at the red cones and take out the horizontal cells, they get faster. So this is just the way the circuit is calibrated; it moves these dots apart a little bit. But then, of course, photoreceptors don't just do time or gain, they do space as well. And the classic textbook view of a photoreceptor is that it has a center activation, which is where its outer segment sits, and it has a surround activation, which comes from the horizontal cells and the neighboring cones they collect from. And this is clearly the case here for red. This is data from Takeshi: we play a bunch of bars here, we're recording from the red cone in the live zebrafish, and it doesn't respond except here, because that's the position where it sits; so it has a narrow center. And then we flip the bars and look for the surround, and depending where you put it, you can get a strong surround effect: the response goes up rather than down. So clearly this red cone is behaving exactly like the textbook center-surround: small center, strong surround. Okay, let's do this experiment in a UV photoreceptor. Doesn't work. We're getting a big center, which cannot be explained by this classical model. Also, what Takeshi tried, and he tried quite a lot, is to get some sort of surround effect out of the UV cones; it doesn't work. Okay: these guys are big blurry pixels; these guys are nice center-surround units.
Okay, so what we have, then, is a system here that's brilliant at detail, right? It's quick, it's small center-surround, but it's low gain, and basically it has all of the information you need to drive vision, but that information is not particularly accessible; it needs processing before you can do anything useful with it. The UV system is the diametric opposite: it's slow, it's got terrible resolution, but it's got really good gain, really good sensitivity. So it's basically a heavily pre-processed, high-sensitivity channel that gets picked up in parallel to the red channel. And these are very different ways of seeing the world. But of course they are both photon counters, because they both only activate when there are photons in the right wavelength range; they don't deactivate when there are photons in the wrong wavelength range. So we've got two brightness signals that work in this kind of way. But then we've got the middle two cones. And a while ago we actually showed that the two middle cones are completely different. Rather than just sensing brightness, the number of photons, these guys are spectrally opponent. That means that depending on the wavelength you use, their activity goes up or down. And the spectral opponency is actually so strong that if you give them white light, the up and the down cancel and you get nothing. So these are not brightness sensors; clearly they do something else than these guys. Now, because they are color opponent, the classical textbook idea becomes that these are color-sensitive neurons, that they are for color vision. And yes, they are. But what is color in the water? If you're near the surface, you get a big nice rainbow; the deeper you go, you lose the rainbow. Why do you lose the rainbow? Light becomes increasingly monochromatic with distance, meaning that if stuff is far away underwater, it has no color as far as you're concerned; you can't see the color. Whereas if stuff is near, it can be colorful.
So color underwater is distance. And I think this is a very striking picture that illustrates this. There's nothing magic about this reef here; it's just a reef photographed in the sun. Stuff near you is colorful, stuff far away is not. So let's think about these color channels as distance channels, because of course we visually evolved under conditions such as this. And I just want to illustrate that this is literally what happens: all I've done here is take the red channel and subtract an equivalently processed blue channel, and you basically get a foreground effect. So if we think of this guy as a distance channel, I think things start to become a little bit more sensible. I haven't put the green one in yet. The reason is that there's one extra twist, and that is that the opponency that Takeshi already described between red and green cones, and this is now looking at the green point here going down, is not just an opponency, it's a time-shifted opponency. There's a delay. Somehow the circuit implements something like a 20-millisecond delay. No idea how that's done, but that's the phenomenon we get. So what does this distance-plus-delay do? Well, it does motion. It does a foreground-enhanced motion computation. It's not directional motion, but it basically tells you that stuff is happening over here. What exactly these computations contribute is of course still something that we need to look at in detail, but it's quite clear that we've got two brightness channels and two pretty complex foreground-enhancing channels in the eye. So if we then think of the four ancestral photoreceptors as feature channels, much like we think of feature channels for pretty much the entire rest of the visual system, I think things start to become a little bit more clear.
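The red-minus-blue foreground effect can be sketched with a toy image. All the numbers here are invented: a distant background is spectrally flat (same value in every channel, since distant light is monochromatic underwater), while a nearby colorful object differs between channels, so subtracting one channel from the other cancels the background and leaves the foreground.

```python
import numpy as np

# Toy 2x3 scene: first two columns are distant background, last column is
# a nearby colorful object that is strong in the red channel.
red  = np.array([[0.3, 0.3, 0.9],
                 [0.3, 0.3, 0.9]])
blue = np.array([[0.3, 0.3, 0.2],
                 [0.3, 0.3, 0.2]])

# Opponent signal: red minus an equivalently processed blue channel.
opponent = red - blue

assert np.allclose(opponent[:, :2], 0.0)  # spectrally flat background cancels
assert np.all(opponent[:, 2] > 0.5)       # the nearby object survives
```

This is only the distance part; adding the ~20 ms delay described above to one arm of the subtraction would additionally make the output transient, i.e. sensitive to change over time, which is the non-directional motion signal.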
Now, if we go with this and take the four photoreceptors, add the rods, which came later, and put them into this black-box model of vision where they get combined and contrasted to drive behavior, that's of course not particularly satisfying; it's just another black-box model, right? So let's unpack the box. The first step to unpacking the box is to think about the brightness channels. And what I want to suggest is that the red and the UV photoreceptors, which are the LWS and SWS cones, so these are the ones that humans still have, are basically two systems that give you high resolution versus high sensitivity. Each of these is primarily responsible for some behaviors. Of course there is crosstalk for certain functions, but this is basically the core of vision, and by the core of vision I mean all of vision. And why do I say that? First of all, they count all the photons. So let's start: if you look phylogenetically, and don't worry about which species is which, this is a mammalian-biased phylogenetic tree of all vertebrates, and hopefully you can appreciate that the red cones and the rods here are pretty much always there. The UV cones are almost always there, but the middle two cones seem to be quite optional in the big scheme of things. If we compare the red and the UV system, I pair the reds and the rods here for a reason I don't have time to go into, but I think they form the same system for high light and low light; you can ask me later if you want. If you look across this tree, for each of these three cones you will find a species that has lost that cone with no replacement. For the reds and the rods, you will not find a vertebrate that has lost both and can still see. What you can conclude from that, I think, is that vision is red cones, extended by rods, and in some extreme cases, like deep-sea fish, you can actually get rid of the red cone because you've got the rods to rescue you.
And there's also the vice versa in some diurnal lizards, like chameleons, that don't have rods but do have the red cones. So vision without red cones and/or rods is just not happening in vertebrates. For the UV cones it's almost the same case, except there are some quirky species that have lost the UV cone outright. For example, whales, large aquatic mammals, have lost the UV cone, but that's a weird, quirky thing. Now, what then of the other two photoreceptors, the ones sitting in the middle that seem to be quite optional? What I'm going to suggest is that they're not for vision; they're for regulating vision. They're a system in the middle that does complicated things, which talks to the primary vision systems in order to regulate them, to make them better, to tune them, to enhance them in some interesting way. Now, how do we test this? So here's the adult zebrafish. The adult zebrafish has the four cones and the rods. The problem, of course, is that the rods and the red cones in my model go together, so if you take out the red cones, you've got the rods to rescue them; you can't really test anything. But nature is our friend, because baby zebrafish, which are actually much easier to work with in terms of physiology, don't have the rods yet; they're not developed, meaning that we've got a full-cone system. And because it's a baby zebrafish, we've got all kinds of interesting tools, importantly including the genetics, so we can just mess with the system, manipulate it, and see what we get. So a long time ago, and I don't want to go into this at all, we showed in a series of papers that the UV system in these baby zebrafish is extremely strongly associated with prey capture. Basically, you take out the UV system and those animals don't eat; you need to give them UV light for them to see the food. So this system is certainly for prey capture in zebrafish.
It might do further things as well; we haven't tested everything, and that's something for the future. But of course that leaves the other side, the middle ones and the red, a little bit in the lurch. So let's look at those. This is now data from Chiara, using a technique developed by Philippa. Basically, this is a larval zebrafish brain; we've got all of these little neurons, and each of these blobs is one of the neurons of the zebrafish brain. We're scanning here with the two-photon microscope in the head, and this is the part of visual space that's aligned with the stimulator placed on the side. This is a stimulator that can stimulate like a projector, but with all of the colors that the zebrafish can see, which includes UV. So here's the experiment. We scan from that region, we look at what all of these neurons do, and we present a bunch of stimuli. Some of the stimuli are simple, some are complicated, but importantly they include moving gratings, which are the sort of thing you might expect to drive wide-field motion behaviors: the optomotor response, the optokinetic response, self-stabilization, that sort of thing. And we also move a bunch of dots, and the dots would be something that triggers prey-capture reflexes. It also includes other stimuli, and then what we plot here is the strength of each neuron's response. This axis is the grating response, so this is a strong grating response, this is none, and this would be negative; and this axis is the dot response. So first, we get a lot of neurons that don't respond to gratings or dots, and these are here in the middle; we're going to ignore those. Then we've got quite a large number of neurons that respond strongly to the gratings, summarized here in the histogram, and we've got a handful of neurons that respond to the dots.
It doesn't come out well at this contrast, but trust me, there are some here; we've got a histogram here, and a handful of neurons that respond to those. That's the control fish. Okay, so what happens if we take out the red cone? I think it's quite a striking effect. We keep the dot response, but the grating response is completely toast. So these animals can't see the gratings, or at least they don't encode the gratings at the level of the brain where we think they should be processed. But then, when you take out the green cones, the gratings are still there; in fact, the grating response is enhanced, and we've lost some of the dot responses. I think what's happening here is that some of the dot neurons turn into grating neurons because of a lack of regulation. Then if you take out the blues, the blues look a little bit like the controls, so maybe they're not so critical for this. But I think where it gets really interesting is when you take out the green and the blue together, turning the fish into a mouse, if you like. You get very few neurons that don't respond at all; most of the neurons respond to both stimuli, and you especially get a dot over-representation. So basically, you take out what I call the regulator system here in the middle, and the visual system just goes bananas; it responds to whatever it wants to. Okay. So how does this look at the level of behavior? I don't have time to go into this in any detail, but we look at locomotor behavior. Here are some control animals, and here are the animals when we take out the red cones: they're terrible. And if we compensate for spontaneous behavioral deficits that I don't have time to go into, we get no effect for the other two. So it's a very systematic effect: you take out the red cones, the optomotor response is terrible; you take out any of the other cones, not much happens.
So we do have a lot more evidence that goes in that direction, but rather than talking about that, I want to also talk about the other side of the story, which is: how do you build a system that even achieves such a separation between cones and behavioral output? Really, what you need to do is take this model and map it onto the retina, because that's where it's going to happen. Okay. If we do that, we can map the different retinal layers conceptually onto different parts of the model. The behaviors, let's call them the ganglion cells and beyond; all of this in between is the inner retina; and of course, photoreceptors are photoreceptors. And, yeah, this is a key thing. Jumping back to the beginning: if the cones are feature channels, but they are spectrally distinct, and we know the spectral sensitivity function of each photoreceptor in the live eye, then if we record any neuron downstream of the photoreceptors and it has the same spectral sensitivity function as one of the cones, a reasonable guess is that that circuit is mainly driven by that cone. But if you get a spectral sensitivity function that requires mixing cones, that's a signature of an interaction between the photoreceptors, right? So based on that logic, and based on what Takeshi and Judith showed earlier, we can already surmise that there's a fundamental interaction between red and green/blue up here in the outer retina. Because if you record the synaptic output of these two guys, you don't get the intrinsic spectral sensitivity you would expect based on the opsins; you get this opponency, and the opponency is specifically a red opponency, not a UV opponency, right? So you've got a red versus green/blue interaction happening up here. But what's happening down here? So this is a nice video that Shin Wei here generated.
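The attribution logic described here, matching a downstream neuron's spectral tuning against the cone sensitivities, can be sketched as a least-squares decomposition into a cone basis. Everything numeric below is hypothetical (placeholder peaks and widths, a synthetic neuron), but it shows the signature being looked for: one dominant positive weight suggests a single driving cone, while mixed-sign weights are the fingerprint of an opponent interaction.

```python
import numpy as np

# Hypothetical cone spectral sensitivities sampled at a few wavelengths.
wavelengths = np.linspace(350, 650, 7)

def gauss(peak_nm, width_nm=60.0):
    return np.exp(-((wavelengths - peak_nm) / width_nm) ** 2)

# Columns: red (LWS), green, blue, UV (SWS1); peaks are rough placeholders.
cones = np.stack([gauss(570), gauss(480), gauss(415), gauss(365)], axis=1)

# A synthetic downstream neuron whose tuning is red minus green.
neuron = cones[:, 0] - 0.8 * cones[:, 1]

# Least-squares weights of each cone needed to explain the neuron's tuning.
weights, *_ = np.linalg.lstsq(cones, neuron, rcond=None)

# Mixed-sign weights on red and green: the signature of a red/green
# interaction upstream, rather than a single driving cone.
assert weights[0] > 0 and weights[1] < 0
```

In practice the measured cone tunings from the live eye would replace the Gaussian placeholders, which is what makes the inference possible at all.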
This basically images a bunch of bipolar cells in the live eye, so we're now looking at this layer, at these neurons. Each of these blobs is one of these synapses here. And we just play a bunch of colors, right? You can see a green flash, and then comes a blue flash, and so on. And hopefully you can appreciate that, depending on the color we flash, you get different layers activated, okay? And it's important to realize that this is not just an on-off thing. The wavelengths matter. Different colors will activate different layers. Therefore, the inner retina is spectrally layered. We've done a lot of work trying to tease this apart in detail, but I just want to give you a sort of punchline overview of how that looks if you make it static. All I've done here, and this is not the quantitative way of doing it, is I've taken the red responses and colored them red, the green responses and colored them green, and the UV responses and colored them blue. And this is what you get, okay? You get a reddy-greeny, yellowy layer up here, then you get a lot of short-wavelength stuff in the middle, and then you get another yellowy-greeny something down here, okay? So you've got long wavelengths, short wavelengths, long wavelengths. Why is this interesting? Well, the nice thing about ganglion cells is that they have a tendency to stratify in particular layers of the inner retina, but they also have a tendency to project to particular bits of the brain, and we know which bit of the brain does what, because that's what zebrafish are being used for in the vision community very much. People look at the brain and go: okay, this bit of the brain runs the optomotor response; this bit of the brain runs prey capture; and so on.
So what we can do is take the projection pattern of a ganglion cell in the brain as a proxy for what that ganglion cell does, and then check what that ganglion cell looks like in the eye; there's a very nice resource that does this, it comes up in a minute. Then we take a second ganglion cell, associated with a second behavior, which innervates different bits of the brain, and we check where that one stratifies, and then we can basically start making a prediction of which layer will drive which behavior, okay? So now I just want to show you that for two big behavioral repertoires that the fish has. One is wide-field motion, which is all of the stuff that we expect the red cones, plus maybe the green/blues, to drive, and these are the dendritic profiles of most of the ganglion cells associated with those behaviors, and I hope it's quite clear that they innervate at the top or at the bottom. The prey-capture ones look like this, so they slot in nicely in between. So we color-code these: we call this the prey-capture zone, we call this the wide-field motion zone, we put them on top of each other and compare them to the spectral layering of the inner retina that you get in the live eye, and I think the correspondence is quite striking, yeah. So there seems to be a fundamental way of wiring the photoreceptors, via the bipolar cells, into the inner retina, so that they form particular layers, and the layers are already predetermined to drive specific kinds of behaviors, okay. But there's of course a missing puzzle piece, because we've got the bipolar cells doing all these clever things and the ganglion cells doing all these clever things, but most of the clever stuff the retina is supposed to be doing is amacrine cells, right?
Amacrine cells are these neurons that I haven't schematized in; they sit there and modulate all of this in complicated ways that no one has really properly studied, because it's just so complicated. But what you can do, rather than going through the amacrine cells one by one, which would be a pain, is to just take them all out in one go using pharmacology, because we know which neurotransmitters they use, so we block them. And if you block the amacrine cells, you get this. Notice that it's not all that different, yeah? So what that really tells us is that whatever the amacrine cells are doing, they're not doing that, okay? With one little exception perhaps: down here, things have gotten a little bit less yellow. And it turns out that if you ask how we can explain the spectral responses of the amacrine cells, the vast majority of amacrine cells are explained exclusively by red inputs; we don't need any other cone. So amacrine cells are not for any of that color jazz; they are for tuning the detail channel of the retina. There's a handful of amacrine cells that also need UV input, and a handful of amacrine cells that flip red versus UV, and those are the only major flips that we see in the retina; and really the greens and the blues don't contribute very much at all to amacrine cell processing. So we can then surmise that the big picture of the inner retina is that we take red, and actually green/blue together, we showed that in this other paper, and we contrast it to UV. So we can start to tease apart the computational logic of how you can take these photoreceptors and make them do different things. We've got this interaction up here, and we've got this interaction down there. Now, I am out of time, I'm sure, which is why I will not talk about any of these other animals, but we do have data on all of them, and I would invite you to ask me later.
So with that, I want to acknowledge the people who have done the work, all of these wonderful people, and these are the funders, and I want to thank you all for your attention. Any questions? Yeah, Tom: underwater, when the sun is out and you're in reasonably shallow water, you've got a problem with caustics, because everything is flickering all the time; that really messes up optic flow and object detection and everything. But it turns out that those caustics are not really very spectral; it's just the intensity that goes up and down in an unpredictable way. So if you have a color opponency, you could basically cancel out those caustics and fix a major problem. That's exactly right, yeah. I didn't plant that question, but that's exactly one of, I think, the key reasons why you need an interaction between a spectral and a non-spectral channel in order to drive motion vision. If you take that away, then your fish is going to respond to the caustics in the water, which are basically ripples of brightness on the ground, and it's just going to go crazy. You need to know whether something is an achromatic versus a chromatic change on the ground in order to not get confused. Yeah, that's brilliant; I mean, you've really stirred things up. But of course, when you make a big soup and throw everything in, some of the ingredients are going to drop out. I'm not sure that you would use the UV channel to pull in high sensitivity, because the dark noise is irrelevant in bright light. Basically, the reason they have larger responses is so that they produce the same photoreceptor voltage response as the green and long-wavelength-sensitive centers, which are then more comparable to each other. That's what happens in flies, at any rate. But flies live above the water. What? Flies live above the water. It doesn't matter about water.
It's just a question of normalizing your gains, and the rest follows. The real point about the UV channel is that it gives you access to contrasts that are not available in the green and the long wavelengths. And the reason why it's blurry is because there's chromatic aberration: you couldn't have a high-resolution green retina and a high-resolution UV retina working at the same time unless you tiered them. Yes, I agree with that. Aside from that, we should discuss this later; but aside from that, I don't think you should be allowed to get away with it. Just on the chromatic aberration: as you say, the zebrafish has this tiered layering of where the photoreceptors sit, and the eyes are so small that the chromatic aberration in the zebrafish eye basically makes that tiering necessary. Thanks. I mean, there's a lot to talk about. I was wondering, are these different types of cones distributed differently across the retina? And if those distributions are different, do they match the kinds of tasks that you think they're performing? Yes, yes, they are. So the UV photoreceptors, specifically in the baby zebrafish, are certainly strongly localized to what we term the acute zone, which is basically a fovea-like region that looks forwards and up. And the fish hunt forwards and up; that's how it is used. So that makes sense. The blue cones go along the horizon, mainly; the green cones and the red cones are pretty symmetrical, but not perfectly. They have a tendency for more down than up, probably because the ground is more interesting than the sky. So it makes sense. Thanks a lot.
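The caustics point raised in the questions can be illustrated numerically. This is a toy model under a stated assumption: caustics act as a purely multiplicative, spectrally flat flicker, so both channels see the same modulation, and a log-opponent channel cancels it exactly while preserving the scene's chromatic contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)

# Toy caustic flicker: intensity wobbles unpredictably, identically in
# every spectral channel (the "not very spectral" observation).
caustic = 1.0 + 0.5 * np.sin(0.3 * t) + 0.05 * rng.standard_normal(200)

# Static scene reflectances in two channels (invented values).
red_scene, green_scene = 0.6, 0.4
red_sig = red_scene * caustic
green_sig = green_scene * caustic

# A log-opponent channel cancels the shared multiplicative flicker entirely,
# leaving only the constant chromatic signal of the scene.
opponent = np.log(red_sig) - np.log(green_sig)
assert np.allclose(opponent, np.log(red_scene / green_scene))
```

A pure brightness channel would see the full flicker; the opponent channel sees none of it, which is why an interaction between a spectral and a non-spectral channel helps stabilize motion vision in shallow water.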