I'll check in and I think it's okay. Hello and welcome everyone, welcome to another virtual vision seminar. It is a real pleasure for us to host, every week, engaging people studying retinal processing, visual neuroscience, and, in today's case, vision ecology. So thank you for following these talks; we are really happy to have such a regular and interactive audience. Also, if you haven't done so yet, do consider subscribing to this channel in order to receive notifications on any talk you may have missed. Now I would like to remind you, as usual, that this talk series is part of the World Wide Neuro initiative, so if you have not done so already, take a look at the other neuroscience talk series. You will find, as usual, all the relevant links in the description. But today we are glad to receive Dan-Eric Nilsson from Lund University in Sweden. Dan obtained his PhD from the University of Lund in zoology in 1983. He then flew across the world and started a postdoctoral position at the Australian National University in Canberra, within the Department of Neurobiology. Dan then returned to Sweden to start his own lab at Lund University, where he founded the Lund Vision Group, which is an internationally leading center for comparative vision research. It was also around that time that he was joined by Eric Warrant, whom we received last April and who talked to us about Australian Bogong moths. Dan is now a professor of functional zoology and chair in zoology at Lund University. He is a fellow of the Royal Swedish Academy of Sciences and of several other academic societies. But he is mostly known by students and researchers within the vision field for having co-authored the book Animal Eyes with Michael Land. Michael Land was an emeritus professor from our very own University of Sussex; he brought a lot to the visual sciences and sadly passed away last December. The Lund Vision Group now has eight principal investigators and about 40 people.
According to the website, they form a cheerful and creative research environment, which I can well believe. Their expertise covers most aspects of visual ecology and eye evolution across the entire animal kingdom. We actually hosted a former member of that group last April, Michael Bok, who told us about fan worm and mantis shrimp vision. To conclude, Dan connects many people in visual neuroscience, and it is a great pleasure to receive him today. So hello Dan, thanks for accepting this invitation, and how are you doing today? Good, thank you. Thank you very much for this nice introduction, Maxime. I'm really happy and proud to be able to give a talk at these webinars, and I think the first thing I should do is to share my screen. Let's see. So, did that work? Right, good. Yeah, so in this webinar, or this little talk, the idea was to give you some sharper views of eye evolution, about the why, how and when of eye evolution. Let's see now, that works better. Yeah, I'm in control of the presentation as well. But I'm not just going to talk about human eyes. In fact, I'm not going to talk about human eyes much at all. I'm going to complicate the matter by looking at eyes and visual systems and photoreceptors across the entire animal kingdom. You all know, of course, that we share roughly the same type of eye with other vertebrates. So we find very similar eyes in other mammals, in birds, in fish, in amphibians and so on. And these eyes that we normally think about are lateral cephalic eyes: they occur in pairs and are situated on the head. We find similar well-resolving, acute eyes also in other groups of animals, such as, for example, cephalopods. That's octopus and squid and cuttlefish; that's what you have to the lower left. But also in arthropods, such as spiders. Although spiders don't just have one pair of eyes; they have, in fact, four pairs of eyes.
Also insects and crustaceans and the long since extinct trilobites had well-developed compound eyes placed laterally on the head. All these examples are of animals that rely a lot on vision. But there are also animals that have much worse, or lower resolution, eyes placed on the head; in fact, many more groups than those that have sharply focused eyes. You find lower resolution eyes, for example, in Onychophora, or velvet worms (lower left), in gastropod snails, in polychaete worms and in flatworms, but I could list many other examples. All these eyes are still lateral cephalic eyes, eyes that sit in pairs on the head. But there are also eyes that are placed in the midline of the head. You probably all know about the parietal eyes of some lizards, which correspond to the pineal organ, or pineal photoreceptors, in other vertebrates. But insects and arthropods often have median eyes as well. And we've got here a flatworm which has a median eye in the center. Actually, all three of these examples have median eyes in addition to their lateral eyes. Some animals, like this little copepod crustacean, have a median eye but no lateral eyes; it's just got a median eye. But there are also eyes, of course, that are not located on the head, especially in animals that don't have a head at all, don't have much of a head, or have reduced their head. Bivalves, scallops and clams, that's an obvious group. They go back to mollusc ancestors that were most likely worm-like and had a proper head. But they don't have a head, or much of a head, anymore, and their cephalic photoreceptors have been reduced; instead they've made new photoreceptors and new eyes on the mantle edge. That's the edge that actually looks out when the scallop or clam is open. In some cases these are beautiful mirror eyes, as in this case of the scallop here, or they could be, in fact, compound eyes, as in some ark clams.
There are also eyes at the tips of the arms of starfish: a group of little photoreceptors that forms a small compound eye at the tip of each arm. Even jellyfish have eyes, even though they've got no head; they've got little sensory clubs between the tentacles, and these sensory clubs are equipped with a number of eyes, some with lenses and some without. Another group, fan worms, have made new, really remarkable eyes, although they're not strictly non-cephalic, because they have developed on feeding tentacles that are, in fact, outgrowths from the head. So, in principle, that's just a new type of cephalic eye. Apart from all these things, we also have dispersed photoreceptors. Many animals have an array of photoreceptors covering their body, and you would think that that is just for detecting light in general, whether it's light or dark, but it turns out that they can actually do a lot more. Sea urchins have little photoreceptors at the tips of their feet, not the spines, but the little suction feet they have all around their body pointing in all directions, and these are equipped with photoreceptors. And it turns out that they can use that for vision: they can orient towards objects of specific types. So they really can integrate all that information into some kind of low resolution, rather simple, but still proper vision. Brittle stars do exactly the same thing; they can orient towards objects using dispersed photoreceptors. So you could say that these are animals that have no eyes, but they still have vision. Chitons also have dispersed photoreceptors on top of their shell plates, and these are most likely used not just for shadow responses, which is clearly one thing, but they may also be able to orient with these photoreceptors. There's also a nice little photo of a jellyfish larva here, with a number of small photoreceptors in a ring around it.
So, with all this diversity of eyes and photoreceptors, in order to understand how it all evolved, we have to see what all these different types of eyes and photoreceptors may have in common. In fact, they don't have an awful lot in common at all, except for one really important thing, and that is light detection through opsin proteins. An opsin is a G-protein-coupled receptor which, as you all know of course, binds a chromophore in the form of a vitamin A derivative that can flip between two different conformational states. The chromophore can be reused by just flipping it back again; it can be used an indefinite number of times, which is a really good thing. The interesting thing, though, is that if we look at which animal groups across the animal kingdom have opsins, it turns out that we find them in all bilaterian animals: in deuterostomes, where we belong; in lophotrochozoans, where molluscs and annelids belong; in ecdysozoans, where arthropods, velvet worms, et cetera, belong. We also find them in cnidarians, where jellyfish belong. So all these groups have opsins, whereas one of the groups that first branched off the animal phylogenetic tree, the sponges, don't have any opsins. So, apparently, opsins must have evolved after sponges branched off. But interestingly, there are several different major opsin families, and these opsin families are shared by the bilaterians and cnidarians, which means that there must have been an early diversification of opsins. So, what did opsins evolve from? It turns out that the G-protein-coupled receptor that most resembles opsins is, in fact, the melatonin receptor, which, as you all know, is used for detecting melatonin, the sleep hormone. Interestingly, the melatonin receptor, together with melatonin, could actually work as a light receptor, because melatonin is irreversibly degraded in light.
It oxidizes, and it's really hard to get it back into a useful form, which means that animals have to synthesize new melatonin all the time, and when you shine light on it, it is consumed. This doesn't matter that much in big animals with large, dark bodies; there, melatonin degradation by light is not a major problem. But in small, early animals that had transparent bodies, it certainly was an important thing, and possibly, if they had continuous synthesis of melatonin, it would break down during the day so that levels would be low, but it would build up at night so that levels would be high. So that could actually work as a light detector, although opsins are way better, because you can reuse the same chromophore over and over again; you don't need to synthesize new chromophore all the time. Sensitivity is most likely also much better with opsins than it was with melatonin receptors and melatonin. So if we now take a look at a complete phylogenetic tree of all animals, one which has a time scale to it (this is something that other people have worked out, the timing of divergence of the different animal groups), we can see when sponges broke off from the rest, and we can actually time the origin of the first opsin. And it turns out that that would have occurred somewhere around 800 million years ago. There are different timed phylogenies with slightly different time scales, so in some other versions this would end up at 700 million years; in this case 800 million years, but roughly at least in that kind of ballpark. Interestingly, the diversification of opsins, that early diversification into several major classes, happened really shortly thereafter. So that was also a very, very long time ago. We will return to this phylogenetic tree a bit later, when we have looked a bit more at eye evolution. But first, let's just ask the question: what could a photoreceptor that has opsin expressed in its membrane be doing?
If you just express opsin in a rather unmodified, unspecialized membrane in a receptor neuron, what could such a little cell do? If this is in an early animal that has not much pigment, not much structure, and is largely transparent, it means that such a photoreceptor would be non-directional: it would pick up light from all directions. So why would anyone want to pick up light from all directions? The only thing you can basically do is to track the daily cycle of ambient light. You can tell day from night. You can possibly also tell full moon nights from new moon nights, so you can follow the lunar cycle. You can follow yearly cycles and things like that. Not an awful lot more. You can actually detect shadows, I guess, so you can use it as a shadow detector. You could use it for UV warning, to tell you that there is too much ultraviolet around and that you should be moving to a darker place, although you can't find the direction to the darker place with just a non-directional photoreceptor. If you live in the sea, in water, then you can of course just go deeper in the water and you'll get away from high levels of ultraviolet. You could also use it as a depth gauge or as a surface detector: if you're a burrowing animal, a non-directional photoreceptor will tell you when you break through the surface into the open. There are still a few animals today for which non-directional photoreceptors are their only type of photoreceptor. One example is the larvae of sea urchins. They've got no screening pigment or anything like that; they have opsins that only serve the purpose of non-directional photoreception, for setting their biological clock. If you want to do anything more sensible, anything more advanced, with a photoreceptor, it will have to become directional by the addition of screening pigment. That's a fairly small thing to do.
You just have to express some kind of dark material close to the photoreceptor, in its own cell or in another cell, to make it directional. And suddenly you can do an awful lot of new and much more interesting things, like orientation in the environment. The animal will be able to orient or direct its movement in the environment, to find the correct habitat, move away from unfavorable places and move towards favorable places. So this kind of phototaxis could put them in brighter places, in darker places, or just find the right intensities where conditions are best for the animal. Directional photoreceptors (alongside, of course, non-directional ones) have evolved in a number of animal phyla. We looked at the jellyfish larvae before. Among flatworms, there are a number of different types that have only directional photoreceptors in their front end. So that is kind of the beginning of lateral eyes, although they're not proper eyes; it's just a single photoreceptor with pigment. You find similar things in the larvae of a number of different invertebrate phyla: they have a pair of directional photoreceptors, and they usually spin when they swim, so they scan the environment as they move. That's the interesting thing with directional photoreceptors: you basically have to move in order to get any information about the differences in intensity in the environment. To improve on this design, you would have to have more of these pointing in different directions, which is actually inventing spatial vision, and actually inventing vision at all, because directional photoreception is photoreception, not vision. But as soon as you have many of them pointing in different directions and you can discriminate between intensities in different directions at the same time, then you've got vision.
In principle, by definition, I guess two pixels is enough, but of course you can't do an awful lot with two pixels, and most animals have far more than that. There are different ways of doing it. You could just have a dispersed visual system, as we've seen before, with a number of photoreceptors with screening pigment pointing in different directions. But you could also have a lump of them gathered inside a common pigment cup, and you would get a cup eye; or you could have an evaginated, bulging-out array of photoreceptors with screening pigment in between, and you would have the first kind of compound eye. These things can of course also be used for orientation, but much more efficient orientation, because this is spatial vision: you can orient in relation to stationary structures, not just the light intensity. You can find spatial structures and orient in relation to them, you can move in straight lines, you can measure your self-motion because you've got optic flow, and you can do an awful lot of interesting things. Now, we put these things at the bottom of the panel just to make space for more evolution on top here. If you want to improve on this system, there is something we very often see in real animals that have these low resolution eyes, eyes that are just used for orientation, that don't resolve well enough to see other animals but only well enough to see the inanimate world and the spatial structures around them that are not alive. Many of these actually have little lenses, or lens-like things, on top of the photoreceptors. In a cup eye it could be something that very much looks like a lens, and actually has some focusing properties as well. In compound eyes you'll find similar things: little clear plugs filling each of the little ommatidia. The reason for all these is not actually to produce a sharp image and see more spatial detail.
The main reasons would instead be to provide UV protection, to put a filter on top of the photoreceptor to remove harmful short wavelength rays. Another one is of course mechanical protection, because if you have little pits and things, you leave them open for smaller organisms to move in there, which wouldn't help very much at all. So plugging them up with some transparent material is probably a useful thing to do. Another useful thing is that they can provide better sensitivity by concentrating light. Not to provide a good sharp image, but just by concentrating light a bit, sensitivity will increase by adding a lens like that. Animals that have these kinds of lower resolution eyes are found in a large number of different phyla, including flatworms, ragworms, the echinoderms we've seen before, and a number of other groups that have paired cephalic eyes with some kind of lower resolution. They use this for habitat selection, of course, for controlling body posture, and also for course control and navigation. And there is another thing that is probably not really fully appreciated, and that is that this kind of lower resolution vision can also help to set behavioral states. It will tell the animal what kind of conditions are prevailing at the moment, and these conditions will tell the animal what to do, because under certain conditions the animal should try to look for food, under other conditions it should look for mates, and under other conditions again it should just rest and hide away to avoid predation, or something like that. So different conditions determine which behaviors are the most useful to engage in at the moment, and this setting of behavioral states is certainly something that vision is involved in, and that lower resolution vision certainly must have been, and still is, involved in. Now, this is an example of an animal that has lower resolution vision.
These tiny eyes of a velvet worm, you see the eyes here, are not good enough to see other velvet worms before they actually bump into one another. They can't see their prey, even though they are predators; they use olfaction and tactile senses to find their prey. And this is what they would see. This is their natural habitat, or rather, that's what we would see in a natural habitat where velvet worms live. They hide under logs and things like that during the day, to avoid being dried out in the midday sun. At night, or at dusk and dawn, they move out from their hiding places and go hunting for prey. As soon as the light intensity starts to increase too much, they have to find their way back to some kind of shelter. And this is what the velvet worm would see. We've modelled it using detailed modelling of the optics of the eye, and it's clear that they can actually see the structures that they orient to, so they can use vision for orientation. So can box jellyfish, another group of animals with lower resolution eyes. They have a lens which is almost abutting the photoreceptors; there really is no space between the lens and the photoreceptors, so the eye is not sharply focused. Now you can see what happens here: we've got an animated rollercoaster ride. That's perhaps not the most natural habitat for a box jellyfish, but anyway, if they'd gone on a rollercoaster ride, the upper panel here shows what they would have seen. It's actually modelled with the proper time constants of box jellyfish vision as well. I'm not quite sure that they would enjoy their ride, but they would certainly see things happen. And real box jellyfish can use this vision to avoid swimming into obstacles, and they can also use it to maintain their position in the right part of a habitat.
If they are washed out into an inappropriate or bad habitat, they can use vision to find their way back. So I think we should stop that before we get motion sickness. This is low resolution vision, which is used for orientation in the habitat. How can this be improved? It can be improved, obviously, by introducing high resolution vision: by focusing the optics, by making as sharp an image as is ever possible. This also involves getting much larger eyes; we'll come back later to why they need to be much larger. But if you have high resolution eyes, then you can see other animals and interact with other animals visually, and if you do that, you actually have to see the other animal as an object and separate it from the background. So object vision of course requires that you can separate objects from backgrounds. It also requires that you can identify the objects, because you can't behave in the same way towards all types of objects; you need to be able to identify objects and categorize them. And you also need to ignore objects that you're not actually interacting with, so you need some mechanisms of attention as well. So there is a lot of processing that has to evolve in order to go from low resolution spatial vision to high resolution spatial vision, to go from orientation to object vision. The animals that have done it are actually just three major groups. We know it's been done by vertebrates; they have invented high resolution vision. The arthropods have done it, actually multiple times, and the cephalopods have done it, so octopus and squid. These are the three groups that have invented high resolution vision and can actually interact visually with other animals, with conspecifics, with prey, with predators, and of course also handle and manipulate other objects that are not necessarily alive but may be food or something like that. All these things come down to the basic task that animals have to solve, which is to discriminate intensities, either in space or in time, and the accuracy
by which you can do that is determined by the random arrival of photons. So if you want high accuracy, if you want to determine the intensity with high accuracy such that you can discriminate between very small intensity differences, then you need to collect many photons. The accuracy is easily computed, simply from the square root of the mean number of counted photons. If you count 10 photons, then the accuracy is roughly 30 percent (the square root of 10 is a bit more than 3). If you collect 100 photons per sample, your accuracy increases to 10 percent. If you collect a thousand photons, you're down to 3 percent; at 10,000 photons you can discriminate intensity differences of only 1 percent. That means if you really need to discriminate small differences, you also need to collect very many photons per sample. So let's see what this actually means in terms of sensitivity, as we go from non-directional photoreception, to directional photoreception, to lower resolution vision, to higher resolution vision. Initially you can have really slow photoreceptors, but they have to become faster and faster. The things they are interested in are contrasts, and to discriminate between day and night, which is a huge intensity difference, you don't need to see very small contrasts. But as you go into directional photoreception, lower resolution vision and higher resolution vision, you need to be able to see smaller and smaller contrasts. And that means, if we look at the white curve here, that the sensitivity would drop from non-directional photoreception to higher resolution vision (we've got a log scale here) by roughly two orders of magnitude, roughly 100 times. So it will be 100 times less sensitive just because the contrasts that need to be detected are so much smaller. The solid angle, of course, goes from 360 degrees to something like 180 degrees
to maybe 20 degrees, down to a few degrees or less, in this process. And if you cut down the angular sensitivity, that means you absorb much less light, and sensitivity drops by more than four orders of magnitude. Same thing with speed: you can start with slow photoreceptors, but as you evolve directional photoreception, lower resolution vision and higher resolution vision, photoreceptors need to become faster and faster, which means you again lose sensitivity, by almost four orders of magnitude. So if we combine all three effects, we have to multiply them, but since we are on a logarithmic scale we can add them instead. And then it turns out that the drop in sensitivity is a daunting 10 orders of magnitude. That's 10 billion times less sensitive, because the need to detect small contrasts, the small angles and the high speed all together make the photoreceptors so much less sensitive. So we basically need to do something about this. If we again take a look at a graph with light intensity on a logarithmic scale on the y-axis, here I've plotted how light intensity changes over the day, from sunlight down to moonlight and starlight, where we've got a difference of about eight orders of magnitude (only six log units down to moonlight). But then, of course, there is a range within each scene: one and a half to two orders of magnitude is the range of intensities within a single scene, and that whole range will slide up and down as day turns into night. There may also be weather changes; for example, overcast will take away one, one and a half, or two orders of magnitude of light intensity. If you're in water, then 10 meters depth in coastal water will remove about two orders of magnitude. So all these things have to be taken into account. Now, if we take an unspecialised cell and put as much opsin as we can into the membrane. We can't put too much opsin in, because the cell has to live as
well, so there needs to be space for all the other trafficking proteins, for the cell's metabolism, and so on. So we can't put too much opsin in it, but if we pack in as much as we can, then the cell will be sensitive enough to do non-directional photoreception down to moonlight intensity. So that's fine: to do non-directional photoreception you don't really need anything special; an unspecialised cell membrane is fine. But if you want to do directional photoreception, because you have cut down on the angular sensitivity, you've shortened the integration time, and you also need to see much smaller intensity differences, it turns out it will only work down to mid dusk; it won't work at lower intensities. And if it only works down to mid dusk, then give it some allowance for overcast conditions and some depth in the sea, and you will easily end up with a sensitivity that requires more than daylight. So you really need to do something to fix the sensitivity of a photoreceptor to make it work as a directional photoreceptor. The obvious thing is to get more opsin into the membrane, which you can do by devoting a specific domain of membrane to containing opsin; then you can pack opsin much more densely. Another thing you can do is to increase the volume of that membrane by stacking it. You can make huge stacks of membrane that contain much more densely packed opsin, and then you can absorb a very large fraction of the light. That rescues this problem and allows directional photoreceptors to work, as long as they can stack membrane.
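As a brief aside, the photon-count arithmetic behind these sensitivity limits is easy to check. Here is a minimal sketch (the function and variable names are my own, and the log-unit figures are the rough ones quoted in the talk, not exact measurements): the relative accuracy of an intensity estimate from N Poisson-distributed photons is sqrt(N)/N = 1/sqrt(N), and independent sensitivity losses expressed in orders of magnitude simply add.

```python
import math

def relative_accuracy(n_photons: float) -> float:
    """Relative uncertainty of an intensity estimate from a Poisson
    photon count: sigma/mean = sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(n_photons)

# Accuracies for the photon counts quoted in the talk:
# 10 -> ~32%, 100 -> 10%, 1000 -> ~3%, 10000 -> 1%
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} photons -> {relative_accuracy(n) * 100:.1f}% accuracy")

# Independent sensitivity losses on a log scale (orders of magnitude) add,
# because the underlying factors multiply. Rough figures from the talk:
# ~2 log units from smaller contrasts, ~4 from narrower angular
# sensitivity, ~4 from faster photoreceptors.
losses_log10 = {"contrast": 2.0, "solid angle": 4.0, "speed": 4.0}
total_log10 = sum(losses_log10.values())
print(f"total loss: {total_log10:.0f} orders of magnitude "
      f"(a factor of {10 ** total_log10:.0e})")
```

This reproduces the figures in the talk: 10,000 photons per sample are needed for 1 percent intensity discrimination, and the three losses together come to 10 orders of magnitude.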
Stacking means using cilia or microvilli, and this is why rhabdomeric and ciliary photoreceptors evolved. Now, looking at low resolution vision: without any specializations at all, a photoreceptor would not work at all for low resolution vision. We definitely need the stacked membranes, and they need to be stacked even more. But that isn't quite enough, because that will only take us down to mid dusk, and as we said before, on a bad day with heavy overcast and some depth in the sea, this limit will be very close to daylight intensity, and it wouldn't work at all into dusk and dawn. But then again, we can rescue this by adding lenses. Not lenses that make a nicely focused image, but lenses that focus just a bit, such that the sensitivity can be brought up. And that is also the reason why we see lenses, but not focused systems, in low resolution visual systems. Going from there to high resolution vision: with an unspecialized receptor cell we clearly wouldn't see at all; we would be many log units too low in sensitivity. We need to stack membranes and we need to engage lenses. But the trick here is not to introduce lenses, because we've already done that before; we can't introduce lenses again. The trick that now has to be used instead is to increase the size of the lenses and to put the retina in the focal plane of the lenses. So this whole process actually implies that the eyes have to grow massively in size to allow higher resolution vision compared to lower resolution vision. And if we now try to summarize the key innovations that must have been going on for eyes to evolve: the first thing would be starting with melatonin as a light receptor, which was greatly improved by changing to opsins instead. Transcription factors would have been introduced here, so things like Pax6 were likely introduced
at a very early stage, because you need something to determine where opsins should be expressed. Next, screening pigment must have been introduced in order for non-directional light sensitivity to turn into directional light sensitivity, and that's also where membrane stacking, in the form of rhabdomeric or ciliary photoreceptors, had to be introduced. After that, multiple pixels and motion detection would take you into lower resolution vision, where focusing lenses are needed to increase the sensitivity enough. From there, object segmentation and identification and attention, that is, processing innovations, were necessary to go from lower resolution vision to higher resolution vision, along with a massive increase in size. And it's interesting, because this massive increase in size really happened. The first animals that had proper eyes are found about 540 million years ago, during the Cambrian explosion; that's when the large, macroscopic animals evolved. So it's not at all unlikely that the evolution of high resolution vision was an important ingredient in the reasons for the Cambrian explosion. Because it wasn't just the eyes: if you want to carry large eyes, you have to have a large and mobile body in order to carry them and to perform the kinds of actions for which high resolution makes sense, and you would also need to evolve a much larger brain in order to process all that information. If we bring back this timed phylogenetic tree, we'll find that 540 million years ago was here, which means that everything from the first opsins to high resolution vision happened a long, long time ago. For most of the time since, really nothing new has happened. So basically all of this evolution happened between 800 million years ago and 540 million years ago; since then we've just lived with what evolved at that time, although of course spatial vision has re-evolved a
couple of times during that time.

Then if we go back to this little scheme, we can actually see how many times different things have evolved. Opsins evolved from melatonin receptors probably only once; everything points to that happening only once. But non-directional light sensitivity turned into directional light sensitivity at least 13 times independently, directional light sensitivity turned into low-resolution vision at least 15 times independently, and low-resolution vision turned into high-resolution vision at least 10 times independently. So eye evolution has been a massively parallel process, except for the very first thing, the origin of the first opsin; that happened once.

So if we summarize all these things: we start with non-directional light sensitivity, which evolves into directional light sensitivity, then low-resolution vision, which we can, if we like, call ancient vision, and high-resolution vision is the kind of end product, which requires object vision. We can actually call this object vision; we don't really want to use "low" and "high" resolution, because object vision is used for low-resolution tasks as well. That's why it's better to call them ancient vision and object vision.

Anyway, all these things evolved because they were driven by the evolution of new behaviors. So non-directional light sensitivity evolved for tracking the daily light cycle, directional light sensitivity evolved to allow orientation to light, low-resolution vision evolved to allow orientation to stationary structures, and high-resolution vision evolved to allow interaction with other animals.

We can also plot it this way, where we have the evolution of eyes on this axis and the evolution of vision on that axis. So we have basically non-directional photoreceptors, directional photoreceptors, low-resolution eyes and high-resolution eyes, and then the types of functions we get out of them: non-directional photoreception, directional photoreception, low-resolution vision and high-resolution vision on that
axis. If we now look at how this may have evolved: of course, it all must start with non-directional photoreception for tracking the light cycle. These types of photoreceptors must have been duplicated and started to be used for other, faster things, like orientation to light, possibly via active light responses such as depth control or shadow detection. Once orientation to light had evolved, that could easily be improved by evolving low-resolution vision. But as soon as low-resolution vision evolves, orientation to light with phototaxis and posture control using just a directional photoreceptor becomes superfluous, because low-resolution vision can do that much better. So there is no continuation there, although non-directional photoreception continues to be important and is still used by all animals, even though more advanced things evolved later. And then of course, when high-resolution vision evolves, it doesn't mean that the low-resolution and orientation tasks become superfluous; they are still as important, and possibly even more important than before, because the animals engage in much more complex behaviors.

Now, this is the regular type of eye evolution scheme, but there are some shortcuts: some really weird animals that have done weird things. This is one example, and that is the fan worms. Fan worms are polychaete worms; their relatives have paired cephalic eyes which provide low-resolution vision, but in fan worms those cephalic eyes have been reduced, and they have evolved new eyes on their tentacles. The purpose of the new eyes is quite clearly to detect predators, which according to the definitions I've introduced would be classified as high-resolution vision: that is, seeing other animals, not just inanimate structures in the world. So that has evolved, but it hasn't evolved from low-resolution vision, because the low-resolution cephalic eyes have been reduced. Instead it is more likely to be skin photoreception, which is
non-directional, that has turned into directional photoreceptors used as alarm photoreceptors. So this is a kind of shortcut.

We find another one in bivalves, and even in chitons. It is not clear what the common ancestor of bivalves (clams and such) and other mollusks looked like, whether it actually had low-resolution vision or only directional photoreceptors, but it doesn't really matter: also in this case, vision for detecting other animals and protecting themselves, like closing the shell, has evolved, most likely from skin photoreceptors, from non-directional photoreceptors. The interesting thing here is that some of these have evolved further, especially in scallops and some other clams. They are mobile animals that can actually move around in their habitat, and because they have lost their original lateral cephalic eyes, they need some means to evaluate their habitat and find a suitable direction to swim or crawl in. So they have re-evolved low-resolution vision, most likely from alarm photoreceptors that had evolved to detect predators.

Now, if we take a look at the normal scheme here, and look more at low-resolution vision and high-resolution vision, we can actually use that to divide vision into four quadrants. The first division is the one between low-resolution vision, let's call that ancient vision, and high-resolution vision, or object vision. That's the major division: one is earlier than the other, one has evolved from the other. But there is also an important distinction between active and passive vision. What I mean by active vision is the closed feedback loop where animals see and act, where the input from vision immediately enables the motor output. The opposite of that is passive vision: the kind of assessment of the environment which doesn't necessarily lead
to any immediate behavior at that time, but which may set the animal's behavioral state. It will tell the animal what kind of conditions there are at the moment, and it will then set the animal's behavioral state such that it engages in particular behaviors but not in others; in other conditions the animal will change its behavioral repertoire. That can be done by low-resolution vision, so setting behavioral states can certainly be done already at the low-resolution stage. But it can also happen with object vision, through the number of different objects: if you see lots of predators, that will tell animals not to do certain things, but if you see lots of conspecifics, animals will engage in other types of behaviors. So this will also definitely control animals' behavioral states and determine which kinds of behaviors animals should engage in. Passive vision we could see as some kind of overarching behavioral control, telling animals what is useful to do at the moment.

Now, interestingly, visual neurobiologists, like many of you listening to this talk, have studied visually guided orientation and visually guided interaction with objects; both of these are well studied. Visual psychologists have studied visual perception, which is another word for passive object vision. But basically no one has been studying low-resolution passive vision, which most likely sets behavioral states. Possibly, if you wake up on a cloudy day, an overcast day, or a sunny day, you feel like you want to do different kinds of things, which I guess is an effect of really ancient systems working on passive vision, a very old part of your visual system. That's all, thank you.

Thanks, Dan, for that, thank you a lot. I would like to remind the audience that if they want to join us and ask their questions themselves, they can join via the link I just shared in the chat, so please do join us. I have one
question from, well, it's obviously an evolutionary question, from Tom Baden: why do you first have all these stacked light-sensitive membranes before some sort of lens? Would it also work the other way around?

In principle it would work the other way around. I think the most obvious thing to evolve first, though, is stacked membranes, because the amount of light you can absorb in a single unspecialised membrane is about 0.002% of the incident light, and of course you also have it distributed over the entire cell. So it's actually harder to introduce a lens before you have concentrated the light-gathering area into a small thing, which is the stacked membrane.

Thanks for that. Following up on your last slide, what do you reckon the field should put its effort into in the future? What should we investigate?

I'm really keen on working out low-resolution passive vision. I mean, everyone knows what active vision is, but if there is active vision there is also passive vision. I haven't found a better word for it, but I think you understand what I mean by passive vision, and low-resolution passive vision is certainly not studied. Visual psychologists have extensively studied perception, which essentially is passive high-resolution vision, but there is not much study at all of how animals' behavioral states are set by the general light environment. I'm sure that is important: in order to be able to move to different parts of the habitat, animals will have to assess the habitat and work out whether it is good or bad, whether they need to move to another place, and they certainly need to work out ways of doing the right thing at the right time, and vision is a really good cue for finding out the current conditions that tell you what is the most sensible thing to do at the moment. There is actually a lot done in Drosophila and other animals: in the central brain there is lots of release of neurotransmitters, which I guess could be the
mechanism by which low-resolution passive vision acts, but the connection from actual visual input to release of neuromodulators to effects on behaviors is basically not studied at all. I think that's an interesting part of the visual system that requires way more investigation.

I have a question from Michael Proulx: does behavioral flexibility interact with the active/passive vision division? It sounded like active corresponded to automatic, but I might have missed the details.

It depends on what you mean by automatic. Yes, in a sense I guess vision is automatic: new input comes into the brain, the brain modifies the current behaviors, updates movement vectors, updates whatever is happening, continuously updates the decision of what is going on, and guides the behavior. You could call that automatic if you like. I've forgotten the names, but there has been a really well-known theory where vision has been divided into two, I think based on patients with brain damage, where they've found that people can be blind, or say that they're blind, but still have behaviors showing that they are not. It seems that the perception part, the passive part, is the conscious part, whereas the closed loop need not actually be conscious; we just ride along and use the passive part to see what's happening. Those are of course things we can investigate in humans only; otherwise we have no idea whether anything is conscious or not in other animals. I hope that answers your question, Michael.

I will move on with something from Karola: do we know the transmittance cutoff for the most primitive lenses?

We've measured them a bit, and often they are slightly yellow; they often remove ultraviolet.

And I have a follow-up from Tom Baden: at which point does wavelength discrimination enter, and how many times?

I expected this kind of question. I think, yes, in non-directional photoreception wavelength discrimination is actually useful, because you can tell different times of day
apart from different weather conditions and different depths in the sea. If you measure two different wavelengths, then you can tell apart what is time of day, what is depth in the sea, and what is a change in weather conditions. For orientation to light, maybe, but I don't know of any system that actually uses spectral discrimination for phototaxis with directional photoreceptors. For low-resolution vision, as far as is known, most cases are actually just single-opsin based, but that could be because we haven't looked closely enough. It's really for high-resolution vision that colors become important. For cephalopods it seems not; they've gone into polarization vision instead. But arthropods and vertebrates have certainly gone for spectral discrimination really early on. I wouldn't be surprised, though, if we find more low-resolution vision systems that actually use more than one opsin class or receptor type, because it would provide them with better information about the prevailing conditions if they could also detect spectral differences.

I see that Tom Baden is with us. Do you want to follow up on chromatic vision and the evolution of wavelength discrimination?

I think we can, but maybe let's talk a little more informally.

I will follow up with a question from Angera. First he says: amazing seminar, it's a privilege to listen to all this information compiled in one talk. About the last point: there is growing evidence of mood modulation by melanopsin in humans; does that count?

I'm not sure what you're referring to. Does that count? Mood modulation by melanopsin? Pardon, but I didn't understand the question.

Another way of asking it: there is growing evidence of mood modulation by melanopsin in humans; does that count?

Relative to what? Can you elaborate?

Okay, I think that's pretty much it; I don't see any more questions while we're waiting for Angera to elaborate. Sorry, it's not a big crowd today; I guess a lot of people will watch the podcast. Oh yes, so does
that count as setting visual states with low-resolution vision? Well, melanopsin, the melanopsin function in humans, with our light-sensitive retinal ganglion cells, of course; I mean, that's a rhabdomeric photoreceptor. It has other roles as well, of course. But then again, vertebrates are pretty special; you don't find anything similar in any other animal group. It's vertebrates that have done this amazing thing of basically seeing with a ciliary photoreceptor which is presynaptic to a rhabdomeric photoreceptor. That's a vertebrate thing to do, and it has some kind of interesting evolutionary background which we don't yet understand, but it seems that no other animal group has made anything similar. That's probably not an answer, but it's a statement.

I see, I was about to say something similar. So I just want to remind people that they can still join us, as I have done now. We have a couple more questions, then we will terminate the stream online, so if you want to join us for the informal session, please do that now. I have one question from Henri Grinside; it's pretty general, if you want to answer it: is anything known about the order of evolution? Did the eyes change and then the nervous system evolve to take advantage of the increased resolution or sensitivity, or do they co-evolve in some way?

They definitely co-evolve, because the driving force has been the evolution of better behaviors, so that puts the pressure on function, and function requires both processing and better sensitivity and so on. You have to have all these things at the same time; you can't just do one thing and wait for the other.

I don't think I've seen any more questions, so I think we're done for today. That was very instructive. I just want to pass on a message that we're not going to host any talk next week; we're taking the week off, and I guess we will see you all in two weeks. Thank you again and see you all very soon.

Alrighty, I guess we're still live, are we?
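The single-membrane figure quoted in the Q&A above (roughly 0.002% of incident light absorbed per unspecialised membrane) makes it easy to see numerically why membrane stacking had to precede lenses. The following is a minimal sketch, not from the talk itself, assuming each stacked layer independently absorbs the same fraction of whatever light reaches it:

```python
import math

# Fraction of incident light absorbed by one unspecialised membrane,
# as quoted in the talk: about 0.002% = 2e-5 (an assumed round figure).
P_SINGLE = 0.002 / 100

def fraction_absorbed(n_layers: int, p: float = P_SINGLE) -> float:
    """Total fraction of incident light absorbed by n stacked membranes,
    assuming each layer absorbs fraction p of the light reaching it."""
    return 1.0 - (1.0 - p) ** n_layers

def layers_needed(target: float, p: float = P_SINGLE) -> int:
    """Number of stacked membranes needed to absorb `target` fraction
    of the incident light."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

print(fraction_absorbed(1))      # a single membrane: ~2e-5, i.e. 0.002%
print(fraction_absorbed(2000))   # a few percent with thousands of layers
print(layers_needed(0.5))        # tens of thousands of layers for 50%
```

Under this simple model, tens of thousands of membrane layers are needed before half the incident light is caught, which is the point of the answer: a lens that concentrates light onto the receptor only pays off once the light-gathering machinery has first been condensed into a dense stack.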