We are live now, and I think we are good to go. Hello and welcome, everyone, to today's talk in our seminar series. I'm Georgia Kafetzis, and I will be hosting today's session. It is my pleasure to introduce today's speaker from the University of Exeter, Dr. Jolyon Troscianko. Jolyon did his PhD at the University of Birmingham, supervised by Jackie Chappell, finishing in 2012; his thesis was on the ecology and morphology of tool use in New Caledonian crows. He then held postdoctoral positions, first at Cambridge, and then in the labs of Martin Stevens and Claire Spottiswoode, before coming to Exeter in 2013, where he has been located ever since, soon starting as a lecturer. His research interests include how animals camouflage themselves and how signalling strategies work, and over the years Jolyon has also developed methods for repurposing commercial equipment, like a typical digital camera, into multispectral, scientifically appropriate imaging tools. Today's topic is centered on theoretical and modeling aspects of color appearance, and we will have the pleasure of hearing about their latest and, I'm sure, exciting work together with Daniel Osorio. So, without any further ado from my side, please all welcome Dr. Troscianko. Thank you, thank you very much, and thank you for having me. It's a real pleasure to be here as a visual ecologist, talking to this audience; I should say up front that this isn't my home field, so forgive me if I approach things from a different angle. I'll start by sharing my screen. And Georgia, could you tell me if it's showing? Yes, we can see it. Great, great. So, yes, I'm a visual ecologist. One of the areas I find really fascinating is camouflage. Camouflage is a really important part of visual ecology: predation is one of the most common selection pressures in nature, and the best way to avoid being eaten is not to be detected in the first place. Here is the classic example of the nightjars. We studied many nightjars in Africa, where camouflage is under such intense selection pressure, and it was really interesting to look at it from an evolutionary perspective. Camouflage matters both for prey trying to hide and for predators trying to conceal themselves from their prey. So I'm going to start by giving a very brief overview of some of the important hypotheses I test, which will feed into why the visual modelling that I'm going to present later is particularly useful for me. In this first piece of work, we were trying to understand why dichromacy is so common in so many animals.
Most mammals are dichromats, and in many primate species all of the males are dichromats. So there's this argument that there must be some selective advantage to being a dichromat, and one of the ideas is that dichromats are better at breaking camouflage. I happened to have data that let us test this: images of camouflaged nightjars that we could present in ways that favor either dichromatic or trichromatic vision. Those are the two examples you see here. Can you spot the camouflaged bird? It's genuinely hard; I won't claim you'll spot it instantly, but you can find it if you look. Another big system we work on is crabs, and it's striking that they are all one species yet they can all look incredibly different from each other. And so understanding the sources of this diversity of appearance in nature is very interesting, and a prerequisite is finding ways to study how predators actually hunt them in real life. And one of the key ideas here is frequency-dependent selection: it's the predators learning to find the crabs which drives diversity in appearance. So what does that mean? A predator, for example a bird or a fish trying to find a crab, might stumble across this mottled one at the bottom left, and then it holds a search image in its head for mottled crabs. That search image helps it find more crabs of that morph, but it makes it worse at finding other morphs. This leads to negative frequency-dependent selection: there is an advantage to looking different from everyone around you. So, how can we test theories like this? A key method we use is a simple online camouflage game. People play these games online and are asked to find the crab as fast as they can, and we record their reaction times. If we want to test a particular crab morph, we can see how quickly it gets caught, and this graph shows how easy it was to find one morph versus another, across all the crab morphs. Let's say you start by learning a green crab. As you find more and more green crabs, you get faster and faster, and the reaction time drops. But then, when the next crab is a different morph, you get a spike in reaction time: the search image you've built up for green crabs actually makes it harder to find the new morph. And that is exactly what we found. So this gives rare camouflage types an advantage, but another aspect that came out from this research was that edge disruption was very important to how predators learn to find camouflage, and to the general effectiveness of camouflage overall, how effective it could be. So what do I mean by edge disruption?
Edge disruption refers to the concept that the main thing that gives away a target, whether to a predator, for example, is its outline: the edges are the most informative cue for detection. Here we have two strategies. The first is disruptive coloration, a technique where high-contrast markings intersect the animal's outline to break it up. The second is background matching, where the patterns simply blend into the surroundings. And in principle you can have any mix of the two, or an animal with no edge disruption at all. The trouble is, this was a very difficult thing to quantify, and harder still to compare across animals. We knew this was important, but there was no good way of measuring it. So I developed a method that could try and get at this, and it's fairly simple: you use Gabor kernels. Many of you might be familiar with this; it's an oriented filter that detects edges, essentially, at different orientations. You run this around the edge of the target and see how strong the edges of the animal are, the true edge of the animal. And then you run the same filter flipped 90 degrees and see how strong the edge is at 90 degrees. So at every point around the edge of the animal, you're measuring its actual edge intensity versus its fake edge intensity. You just take a ratio between the two and sum it around the outline, and you end up with what we've called the GabRat method. And it actually turns out to be one of the best predictors of how effective camouflage is for humans doing camouflage tasks like this. It beat all sorts of much more sophisticated methods for pattern matching and feature matching and so on. Another interesting thing, though: you'll see the little sigma three; that was actually at three cycles per degree. So this GabRat method worked most effectively when we used a Gabor kernel at three cycles per degree. By that, I mean how many cycles you have per degree of visual angle. So if you have a very tiny filter, you'd have lots of fine stripes, and a very large filter would be big stripes. Stripes where you have three stripes per degree were the best, and this will come up later as being an important thing. But it also highlights the importance of spatial information in camouflage and in visual ecology generally. And this is an area where the field has been very slow to latch on to what we know must be true: spatial information is of course super important to the way camouflage and signaling work. So if you take this incredibly toxic moth that's displaying its bright coloration patterns, up close these patterns are very salient, very easy to see, and here we can simulate what it looks like to a human at two meters. But as you get further and further away, those colors blend together, and at 32 meters you can't see those stripes at all anymore; they blend together and become perfectly background matching. And this is a key strategy that you find again and again in nature: something can be signaling up close but perfectly background matching at a slightly larger distance. Traditionally, we've only looked at this kind of acuity limit. This is kind of the state of the art at the moment: very few people are considering anything more than just the visual acuity limiting the stripes you can see, for example. But it's very important.
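To make the Gabor-ratio idea concrete, here is a minimal sketch in Python. It is illustrative, not the published GabRat implementation: the function names, the outline sampling, and the fake/(fake+true) normalisation are my assumptions; the transcript only specifies a ratio of true to fake edge energy summed around the outline.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, phase=np.pi / 2):
    """Odd-symmetric Gabor kernel: an oriented edge detector."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.sin(2 * np.pi * yr / wavelength + phase)

def gabor_edge_ratio(image, outline_pts, edge_angles, size=21, wavelength=7, sigma=3):
    """At each point on the target's outline, compare Gabor energy aligned
    with the true edge against energy at 90 degrees to it ('fake' edges),
    then average around the outline. Higher = more disrupted outline.
    outline_pts: (row, col) pixels on the outline; edge_angles: the local
    edge orientation at each point (both assumed precomputed)."""
    half = size // 2
    padded = np.pad(image, half, mode='reflect')
    ratios = []
    for (r, c), theta in zip(outline_pts, edge_angles):
        patch = padded[r:r + size, c:c + size]
        true_e = abs(np.sum(patch * gabor_kernel(size, wavelength, theta, sigma)))
        fake_e = abs(np.sum(patch * gabor_kernel(size, wavelength, theta + np.pi / 2, sigma)))
        ratios.append(fake_e / (true_e + fake_e + 1e-9))
    return float(np.mean(ratios))
```

The filter size in pixels would be chosen so the carrier works out at three cycles per degree for the assumed viewing distance, matching the sigma-three result described above.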
Another aspect of the visual modeling I use is understanding color detection and being able to discriminate the colors of objects. Obviously that's important from a camouflage perspective, but also for a pollinator finding a flower: making sure that flower stands out clearly from the background. The research I'm going to present here was using moths and flowers. Nocturnal hawkmoths are actually trichromats, with receptors sensitive to the green, blue, and ultraviolet ranges. But, incredibly, their color vision works in very dim light: it can work down to starlight levels of illumination. So it pretty much never gets dark enough for their color vision to switch off. When it gets much past dusk for humans, our color vision stops; we switch to rods and we don't see color. These guys are acting just like bees: they're using the color of the flowers to find the flowers at night as if nothing had changed. And they're very important pollinators, about as important as bees are for pollination networks. But we wanted to understand how artificial light types interfere with the ability of moths to see these flowers. So, when it comes to modeling color discrimination, the field is very dependent on models such as this. There are a few models, but this is one that we use a lot: the receptor noise limited model. It's incredibly useful because you don't need to know a huge amount about the animal's visual system for it to work. The model assumes that the thing that defines detection thresholds is mostly neural noise; that is what stops you being able to, for example, work out whether two colors are different or not. And you don't need to know a huge amount about the visual system, just its spectral sensitivities and cone ratios, plus a bit of behavioral validation. It has its limits, like all models do, but we used this model to simulate how these flowers might look to a hawkmoth under different lights. So here's an example of a foxglove, to human vision on the left, and then to an elephant hawkmoth under different light types. Now, these images all have the same white balance, so the different colors here are colors generated by these weird light sources, very spiky light sources in some cases. For this modeling we didn't actually use photographs, we used spectra, but this gives a nice, visually appealing idea of what's going on. So for this research, we simulated the ability of hawkmoths to see flowers against their surrounds, so green surrounds, under different light types on the vertical axis here and different light intensities on the horizontal axis. Where you've got broadband white light, so daylight, metal halide and white LEDs, all these green boxes show that the moths would be able to see colors as well as they would under moonlight. And then the black boxes show that there would be inhibition of color vision. So where you have low pressure sodium and orange LEDs, very narrow band orange sources, that blocks all color vision, as you'd expect. The really interesting finding, though, came with these broadband amber light sources. These are high pressure sodium, a very common light source, and PC amber LEDs, which are held up as a potentially more eco-friendly future light source. We found that you get a really interesting interaction between light intensity and the ability to see the flowers. The effect also varied by flower color, so here purple and yellow flowers are split into different groups.
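To make the receptor noise limited (RNL) model concrete, here is a minimal sketch of the standard trichromatic form (Vorobyev and Osorio, 1998). The function name and the example numbers are illustrative assumptions, not values from the talk:

```python
import numpy as np

def rnl_delta_s(qa, qb, weber, abundances):
    """Receptor-noise-limited colour distance, in JND units, between two
    stimuli for a trichromat. qa, qb: quantum catches of the three
    receptor classes for stimuli A and B; weber: Weber fraction of the
    most abundant receptor class; abundances: relative receptor numbers."""
    qa, qb, abundances = map(np.asarray, (qa, qb, abundances))
    df = np.log(qa / qb)                                # Weber-Fechner receptor signals
    e = weber * np.sqrt(abundances.max() / abundances)  # noise per channel
    e1, e2, e3 = e
    d1, d2, d3 = df
    num = (e1 * (d2 - d3))**2 + (e2 * (d1 - d3))**2 + (e3 * (d1 - d2))**2
    den = (e1 * e2)**2 + (e1 * e3)**2 + (e2 * e3)**2
    return float(np.sqrt(num / den))

# Hypothetical catches for a flower vs. its green surround under one lamp:
# a result below ~1 JND means the moth should not be able to tell them apart.
print(rnl_delta_s([0.62, 0.40, 0.18], [0.55, 0.42, 0.20],
                  weber=0.12, abundances=[1, 1, 2]))
```

This shows why the model is attractive in visual ecology: spectral sensitivities, receptor ratios, and one behaviorally calibrated Weber fraction are all it needs.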
What this means is that a purple flower right under a street light would be incredibly easy for the moth to see; in fact, it might be easier than it would normally be under moonlight. But then that same flower slightly further from that same street light, just at slightly lower intensity, that street light will actually inhibit the ability of the moth to see the flower's color. And so this could have unknown consequences for their ability to pollinate. So, throughout the work I've shown so far, each study really depends on visual modeling, whether that's comparing color or brightness, pattern, edge disruption, all of these things. Hopefully this gives you an idea that effective visual modeling is really critical in our field, for all sorts of reasons. But there's really quite a big elephant in the room, particularly when it comes to modeling brightness or luminance. Take this hypothetical example, which we might often encounter in my field, of which crab is brighter. The average reflectance of these two crabs might be identical, but the one with high contrast at the bottom, to me, looks slightly brighter. So it's difficult to know at what scale to measure, and if you remember the spatial, viewing-distance-related modeling from earlier, that can have a big influence. But we can make it even more complicated by considering that the same crab might be on different backgrounds. So what I've done here is take the top crab and duplicate it on a light and on a dark background, and the bottom crab again, duplicated. And to me, the one on the left now looks brighter than the one on the right, but this effect is more intense in the lower crab than the upper crab for me. Some of you will recognize this as simultaneous contrast. Yes, the crab against the dark background is looking lighter, and sure enough, we do find this effect. And people have tried to model it, but the current state of the art in luminance modeling for non-humans is really quite crude, and it doesn't perform very well in behavioral tasks. There are some really nice behavioral experiments on triggerfish showing that the current models don't really work very well, and this simultaneous contrast effect is difficult to explain. So this is another version of that same crab simultaneous contrast effect. The gray bar in the middle is actually all one gray level. You'll find that impossible to believe, but this gray on the left is the same as that gray on the right if you were to actually measure it; it's the gradient in the surround that creates the effect. Many of you will already be familiar with visual phenomena like that, and there's a huge plethora of these visual phenomena that have been described in the human world. Those of you more familiar with the human vision modeling world will know a lot of these things. A lot of these phenomena have been explained by very high-level functions, like lighting and atmospherics or your 3D perception, but there have also been lots of attempts to make low-level models, which tend to explain some phenomena; it's rare that they're able to explain many, and they tend to involve lots of feedback loops and not be very generalizable or neurologically plausible. So, slightly undaunted by the centuries of work on all of these phenomena, I just wanted to know how to model the brightness of the crab.
So during the lockdown, I spent lots of time sitting down, doing lots of coding and thinking about how this might work. I'll just explain a few of these phenomena to give a flavor of how weird and different they can be. I've already introduced this simultaneous contrast one: this gray block looks like a gradient against an inverse gradient in the background. What that is showing is that if you take a gray and surround it by darker, you can make it look lighter, and if you take the gray and surround it by light, you can make it look darker. But White's illusion down here, with these high contrast stripes, does exactly the opposite. Here, this gray on the left is identical to this gray on the right, but the one on the right looks far lighter. And in this case, surrounding a gray by light makes it lighter and surrounding it by dark makes it darker. So with White's illusion we've got an assimilation kind of effect. But it's not just about changing the brightness; it can change contrast as well. Below White's illusion here, we've got a checkerboard target, and this internal checkerboard has the same internal contrast: the gray levels are the same on the left as they are on the right, but surrounding it with a high contrast versus a low contrast background makes the internal contrast appear to change. And that same phenomenon holds true with orientation. This target on the right here is the same contrast as the one in the middle and the one at the bottom, but where the stripes go in opposite directions, it looks like much higher contrast. So there's a lot going on here to unpack and a lot to potentially take on board: a huge number of different phenomena, but basically there are all sorts of ways of messing with our vision. So where to start with this? Well, when modeling vision, I really wanted to consider contrast sensitivity functions. These describe the ability of an animal to see sine waves of different contrasts at different spatial frequencies. And we get these typical effects where humans, for example, have their highest contrast sensitivity at about three to four cycles per degree, which fits in nicely with the GabRat, which was most effective at three cycles per degree. So this is a key thing, and in the world of animal visual modeling we have almost never considered these. There was a nice paper on zebra stripes and lions' ability to see those stripes, but beyond that, contrast sensitivity functions have largely been ignored. Now, an interesting thing: humans are able to see contrasts, the difference between light and dark, of about 200 to one. But when you go out and buy a TV these days, you'll want to buy a TV that has a dynamic range that's insanely high; modern HDR TVs have a dynamic range of about 10,000 to one. So how can a system whose highest sine wave sensitivity is 200 to one actually add up to this enormous dynamic range that's so much higher? It's quite an interesting problem. But the plot thickens even more when you consider that neurons are incredibly noisy. This work by Simon Laughlin showed that neural coding is limited by noise, and that the dynamic range, the range over which neurons work, is matched to natural image statistics. But the bandwidth of these neurons is extremely low. So, for example, neurons can only code for about 10 levels: the highest level they can encode is about 10 times higher than the lowest they can encode.
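The contrast sensitivity function itself is easy to write down. A common descriptive form is a log-parabola (in the style of Watson and Ahumada's standard model); the parameter values below simply plug in the talk's numbers, peak sensitivity around 200 to one near four cycles per degree, and are assumptions rather than a fitted human CSF:

```python
import numpy as np

def csf_log_parabola(f, peak_sens=200.0, peak_freq=4.0, octave_width=1.5):
    """Log-parabola contrast sensitivity: a bell curve over log spatial
    frequency f (cycles/degree). Its reciprocal is the smallest visible
    Michelson contrast at that frequency."""
    return peak_sens * np.exp(-0.5 * (np.log2(f / peak_freq) / octave_width) ** 2)

# At the peak the detection threshold is 1/200 = 0.005 Michelson contrast;
# two octaves away (1 or 16 cpd) the threshold is substantially higher.
for f in (1.0, 4.0, 16.0):
    print(f, 1.0 / csf_log_parabola(f))
```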
I'm going to be using the word bandwidth quite a bit, and that is a difficult term because it's used in different contexts. Mostly in this talk, by bandwidth I mean neural bandwidth: the range a neuron can code for between firing at its peak rate and firing at its lowest rate. So if you look at the range where a neuron is operating linearly, that's working within its bandwidth. It must saturate at some point, and it must have a point below which it can't fire any slower, a zero. So how can neurons that can only encode 10 levels, a bandwidth of 10, give us high dynamic range images of 10,000 to one? Why are we going out and buying these HDR TVs? Big question there. Another important sense of bandwidth is spatial bandwidth. Images are normally broken down into different spatial components, and this work by David Field is important for showing how you can break down a natural image into lots of different scales. On the left of the image here, we've got the Gabor filters again, at different orientations and at different sizes. If you use these to break down an image, you get lots of information out, breaking the image down into these different spatial components, generally using an octave scale, which is sensible. And bandwidth is often used in that spatial context, but as I mentioned, I'm going to be using it mostly in the neural context. So that's some background. When I was starting out, what effect did I really want to explain? All those phenomena are not particularly useful quantitatively; I needed an effect we could model nicely and quantitatively to get my teeth into. And the crispening effect stuck out to me as a really important one to start with. So, the crispening effect: here we have a row of gray tiles, and that same row is shown three times, so these rows are all identical. The only difference is the brightness of the background. The crispening effect refers to the brightness differences caused by the background. If you look at the three tiles on the left, with the dark background here, there seems to be a very big difference in contrast between this tile and this tile, whereas for the same tiles against the lighter background, the step in contrast, the step in brightness, seems much smaller. And you get the equal and opposite effect against the light side here. Now, there's also a simultaneous contrast effect going on. If you take this square, for example, in theory this gray should be identical to this one, but actually, to me, this gray looks more like that gray; so we have a sort of diagonal line of equal gray levels here. So the background changes the contrast, but it also changes the brightness. You get simultaneous contrast effects going on, but you can also get White's illusion effects on this screen as well. Here we've got white surrounded by black, and this black actually makes this white look slightly less bright than this white. So we've got simultaneous contrast, assimilation, and contrast induction, or whatever you want to call it, all going on in one phenomenon. So I thought this was a very useful phenomenon to start with. And thankfully, there are good behavioral data measuring this crispening illusion, gathered by Whittle.
So he laboriously got subjects to tweak these gray levels on a screen until the difference in gray level between all of these patches looked equal. What this makes is an equal-contrast-step color space, and the results are shown on the graph on the right here: the luminance is on the x-axis and the equal brightness contrast on the y-axis. The really important thing in this curve is that there's an inflection point: the curve is steepest at the background gray level. So you're really good at discriminating gray levels near the background, and it tails off at the top and tails off at the bottom in this kind of interesting way. So here are the main concepts that I wanted to draw into this model. We have the crispening effect as a quantitative thing. The concept of contrast sensitivity, so we can't see certain contrasts. Then we've got the concept that neurons have a limited dynamic range and a limited bandwidth they can code for, which ties into the dynamic range of the natural scene. But we should also throw in some assumptions about natural scene statistics. This neural dynamic range has evolved to match natural scenes, and so neurons should code natural scenes efficiently. By efficient, I mean that the neurons should be firing: they shouldn't always be saturated, and they shouldn't always be sitting dormant. They need to be used across their dynamic range efficiently, at different scales. And in pretty much all of these areas I'm wading out of my comfort zone, so thankfully I've had a lot of help from collaborating with Daniel Osorio, who's been really vital in giving me some sanity checking and helping with the modeling. So what does the model do? Let's just run through it briefly. You start with an input image that you split into a luminance, or brightness, channel and color-opponent channels: red-green and blue-yellow. Then you split that image into different spatial scales, and you can do that as visual systems do, using either center-surround, so a difference of Gaussian kernel, or a Gabor, the oriented filter I mentioned earlier. Center-surround you find a lot from the retina onwards; there are lots of cases of center-surround processing in the nervous system. What that means is you take this thing, called a kernel, and you convolve the image with it: you move the kernel around the image at different sizes and compare the center versus the surround. And you can do that with oriented filters as well. For the chromatic side of things, there's less evidence of oriented chromatic filters, so we kept those to center-surround only. Rather than a normal convolution, I found it was actually important to apply a Michelson contrast operation at each point. That means that at each point, as you convolve the image, you take the Michelson contrast of the center versus the surround. That seemed to be quite important. And the output is shown here: you just break the image down into lots of different scales, which hopefully many of you will be comfortable with. Then the model considers contrast sensitivity functions. These describe, as I mentioned, the smallest detectable sine wave at different spatial frequencies, and you have these for luminance and also for the red-green and blue-yellow channels. Here I'm showing the contrast on a Michelson contrast scale, because that matches the way the modeling works, and apologies in this image for the moiré effects.
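As a rough sketch of that decomposition step, assuming octave-spaced scales and the 1.6-times centre-to-surround ratio mentioned later in the Q&A (the function name and defaults are illustrative, not the published implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def michelson_dog_bands(lum, n_bands=6, sigma0=1.0, surround_ratio=1.6):
    """Split a luminance image into octave-spaced bands, taking a
    Michelson-style contrast of centre vs. surround at every pixel,
    (centre - surround) / (centre + surround), rather than a plain
    difference-of-Gaussians convolution."""
    bands = []
    for k in range(n_bands):
        sc = sigma0 * 2 ** k                       # centre scale, one octave apart
        centre = gaussian_filter(lum, sc)
        surround = gaussian_filter(lum, sc * surround_ratio)
        bands.append((centre - surround) / (centre + surround + 1e-9))
    return bands
```

An oriented (Gabor) variant would replace the two Gaussian blurs with oriented filter pairs; the chromatic channels would, as described, keep the centre-surround form only.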
So what you should see is big bands on the left, and these should get smaller and smaller towards the right until you've got very fine spatial frequencies here; but you get all these moiré bands, so sorry for that. I've also flipped the axis, because I've kept the convention of contrast sensitivity functions: people are used to seeing them as a kind of inverted U, and so I've just kept that principle by inverting the y-axis here. So the first thing the model does is throw out all of the information that is below this activation threshold. The contrast sensitivity function tells you which sine waves will be invisible, and it just throws all of those out. That should be quite uncontentious, hopefully. But there must also be an upper saturation threshold, above which you're unable to code for larger contrasts. To my knowledge, this is something that has not actually been considered in the visual modeling world at all, but it's very interesting, and it must be true. So here we've got the output of a monkey retinal ganglion cell from Derrington and Lennie. As you increase the contrast, you have this nice linear region where the neuron is firing faster and faster, but then there must come a point at which it can't fire any faster. There's a physiological limit above which neurons simply can't fire any faster, and this must be true of pretty much all neurons. So this is an important assumption, and this is where the neural bandwidth comes into play. Neurons are firing, but they do not have unlimited bandwidth: there's a point at the bottom, set by the contrast sensitivity function, below which they're not firing at all, and then there's a point at the top at which they stop being able to fire any faster. So our model makes the assumption, a potentially large assumption, but one we need to make, that the bandwidth of neurons at different spatial frequencies is the same. And if we make that assumption, that the bandwidth is uniform, then the dynamic range must scale as a function of the activation threshold, the contrast sensitivity. That's what this blue line shows here: if we have the activation threshold, the point below which you're not seeing any contrast, then we also have the saturation threshold, the point at which the neurons can't fire any faster. And that's shown here. Another way of showing this is with the same image. If we assume that the bandwidth is four, those neurons can code for anything between their activation threshold and four times that value. Then we have a situation like this, where the contrast sensitivity specifies the activation threshold and the saturation threshold is four times that. So, really interestingly for humans, at the peak of our spatial sensitivity, around four cycles per degree where we're really sensitive, you have neurons that are really ready to fire: the tiniest little bit of contrast and bam, they fire very quickly, but therefore they will also saturate very quickly. So they've got a small dynamic range but high sensitivity, whereas at the low and high spatial frequencies we have very low sensitivity, which gives an enormous dynamic range. And this is really interesting: combining different dynamic ranges with the same bandwidth.
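In code, those two cut-offs reduce to a couple of lines per spatial band. This is a minimal sketch with hypothetical names; `bandwidth=4` mirrors the illustrative figure just described (the fitted value, as discussed shortly, comes out nearer 15 for non-oriented filters):

```python
import numpy as np

def clip_to_bandwidth(band, threshold, bandwidth=4.0):
    """Zero out contrasts below the CSF-derived activation threshold
    (invisible) and clip contrasts above bandwidth * threshold
    (saturated). With a uniform neural bandwidth, the saturation point
    simply scales with the activation threshold at each frequency."""
    sign = np.sign(band)
    mag = np.abs(band)
    mag = np.where(mag < threshold, 0.0, np.minimum(mag, bandwidth * threshold))
    return sign * mag

# per band: threshold = 1 / csf(f) at that band's spatial frequency f
```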
Then the next thing the model does is whiten the image. Some of you might be familiar with that concept. When you've broken the natural scene down into spatial scales and thrown away the sub-threshold and saturated information, you then whiten the image so that every spatial band has the same amount of contrast going on: whitening it, making equal energy in each spatial band. The size of the black arrows here shows the size of the gain needed to do that, and all of these green bars have been stacked where the sub-threshold information has been lost. So this is what the model is doing at each spatial scale. Then the model simply pulls all of these spatial bands back together again, and you get some pretty funky looking output like this. This output, I should say, has not been calibrated to look sensible on monitors; this is a model of an internal representation in someone's brain. It's not meant to look nice, but actually you get a really cool image out of it, a cross between an impressionist painting and one of those high dynamic range images you see. So, walking through this bluebell wood: photographing it is really disappointing. When you're walking through the woods with this lovely bluebell display, you take a picture of it and you can't see the bluebells, they're so small. Whereas this model predicts that you've got this beautiful, impressionist-painting-style sea of blue, which is much more how I remember the scene looking. And it shows much higher detail in high contrast parts of the image: you can see into all of the ivy, which is really black, you can see all the leaves. So, quite interesting output. But there's a key unknown parameter in this model so far, and that is the bandwidth. In the image I showed a minute ago, I assumed the bandwidth was four just to make graphing it look easy, but that's pretty much an unknown number. What we can do is use the crispening data. So what we did is take the input image that Whittle used, populate it with the gray levels that people came up with for equal perceptual steps between grays, fit the model to this image, and vary the simulated bandwidth of the neurons. And you get really nice output that was able to match Whittle's crispening data really well. The green and the gray lines here show the two flavors of the model, using either oriented or non-oriented filters: the Gabor is the oriented filter and the DoG is the non-oriented one. And you get a beautiful fit to the behavioral data. Importantly, though, you can see what number it gives you, and for the non-oriented filters it predicts the bandwidth is 15. So really close to that sort of 10 range: Simon Laughlin was doing it on flies, and they had a range of about 10; in humans it's probably between about 10 and 20 for retinal ganglion cells. So that number really seems to make sense. And interestingly, for the oriented filters, which we tested using four orientations, you get almost exactly a quarter of that 15: the number was 3.75. So the numbers it's spitting out seem quite plausible in terms of neural dynamics. But we can actually test that more explicitly. Here we can go back to the Derrington and Lennie data. These data were from a monkey magnocellular retinal ganglion cell shown sine waves of different contrast intensities at four cycles per degree. And as you increase the contrast, you get this linear range.
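A minimal sketch of the whitening-and-recombining step, assuming "equal energy" means normalising each clipped band to the same RMS contrast before summing; that normalisation choice is my assumption, not a detail given in the talk:

```python
import numpy as np

def whiten_and_recombine(clipped_bands):
    """Apply a per-band gain so every spatial band carries equal contrast
    energy (whitening), then sum the bands back into one 'internal
    representation' image. The gain plays the role of the black arrows
    in the figure described above."""
    out = np.zeros_like(clipped_bands[0])
    for band in clipped_bands:
        rms = np.sqrt(np.mean(band ** 2)) + 1e-9   # the per-band gain
        out += band / rms
    return out
```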
If we plot it on a linear axis, and it probably should be plotted on a linear axis, you get this linear region and then quite an obvious saturating-out effect. And if we fit our model using the crispening data to get that 15-to-one contrast bandwidth, it creates a response curve at four cycles per degree that almost exactly matches Derrington and Lennie's measurements of monkey retinal ganglion cell responses. So this green line is our model's predicted response, and it follows very nicely this linear region and then saturates out. And both Derrington and Lennie and our model predict that at four cycles per degree the contrast saturates at about 0.2, which is quite interesting. Now, a really striking thing about this model is that it's throwing away vast amounts of information, at least it seems that way. And throwing out information is really useful, because it's compressing the image quite dramatically, and image compression is going to be really valuable for processing and for efficient use of neural structures. But all of the gray here is being lost because it's sub-threshold, and all of the blue here is effectively being lost because it's oversaturated. How can you throw out so much information without causing some pretty weird effects? Well, one answer is that the information isn't quite being thrown out: there's a lot of overlap between neighboring spatial bands. I can illustrate that with this image here. If we take this input image, again, apologies for the moiré effects, you've got large, low spatial frequencies on the left here and very high spatial frequencies on the right, with low contrast at the top. A really basic image, just to test the landscape. Put this through the model, and here are the difference-of-Gaussian responses, from low spatial frequencies through to high spatial frequencies. The color coding on the right here shows in which areas the model is preserving information. The green areas are where the model predicts the neurons to be operating within their bandwidth, working happily in their linear operating range. The blue areas are where they become saturated, and the gray areas are where they're not firing at all. And what you notice is that there's a lot of overlap. There's no single point in this space where it's all saturated at all spatial frequencies; there are always a few spatial frequencies operating at any one time. So if you take this middle section here, down in the high spatial frequencies you can actually detect contrast, and in the low ones as well. And this is really neat, because it can show how you can potentially see extremely high dynamic range natural scenes, or a high dynamic range TV, how you can still see information even when coding it only with neurons that have a 15-to-one bandwidth. But it doesn't explain everything. You're still going to be creating compression artifacts; you can't throw away all this visual information without creating some odd phenomena. And that, excitingly, is where all of these things come in. So we systematically tested over 35 different brightness and color illusion phenomena. These include things I mentioned before, like simultaneous contrast and spreading or White's illusions, the crispening effect of course, things that have been attributed to atmospheric or lighting or three-dimensional effects, Mach bands and Hermann grid illusions, lots of cool stuff.
There's a huge number of things, but basically everything we threw at the model, it qualitatively predicted. So we came up with a model that was just fitted to predict the crispening effect and neurophysiological behavior, and as a byproduct it actually seems to predict a bunch of these phenomena. It wasn't our explicit goal to match them, but matching them was quite neat. So here, for example, is Adelson's shadow illusion, quite a common one. For those of you who haven't seen it, this gray square here is an identical gray level to this one here, which is almost impossible to believe. Shove it through our model, and sure enough, it predicts that this square should look lighter than that one. You can kind of get the same effect by taking out the low-pass, so removing the low spatial frequencies, but there are lots of other illusions I could show where that doesn't work. So it seems robust in handling illusions. Another nice thing about the model is that you can actually interrogate it quite well: you can work out why it is seeing the phenomena it is seeing. So how is the crispening effect working? How can it be explained by this model? Well, we have the input image on the left here, and we've got that same clipping image with the same coding: blue being areas that are oversaturated, green where the model is happy and working linearly, and gray where it's all sub-threshold. The model predicts the effect nicely: this graph is showing the areas of highest contrast step between neighboring patches, and for the black background it's highest on the left, while for the white background it's highest on the right. But if we zoom in slightly, we can actually work out how this is working. In the middle of the image we've got grays that match the background gray quite nicely, and you'll see there's lots of green there. That means those grays are working within the dynamic range: the grays are close enough that the model isn't oversaturated. So here at the white end it's not oversaturating, and here it's not oversaturating either, whereas we're getting a lot of saturation in other spatial bands, and this varies between the spatial bands. And when you sandwich these things together, it shows why you're better at seeing contrasts that match the background: more of your filters are working within their bandwidth than would otherwise be the case. And so again, this hopefully helps explain how we can see these incredibly high dynamic range natural scenes, or TVs: we're splicing the scene together from a bunch of channels with very limited bandwidth but very different dynamic ranges. Shove them all together, and you can actually see a lot of detail at incredibly high dynamic ranges. The, I can't pronounce this, the Chevreul staircase illusion is a nice one to look at as well; apologies for my French. The idea with this illusion is really neat. You've got a bunch of different steps in gray level, so this is just a square-wave step down from dark to light. And if you take this rectangle and flip it 180 degrees, that's what you have here, and it looks completely different. It's amazing: now it looks like a series of gradients rather than blocks of color. So how is this working? And sure enough, the model does predict it quite nicely: here, for example, on the gray line you've got the zigzags of gradients, and on the green line you've got the steps.
And it handles the controls for this illusion quite nicely too. The model explains this illusion because the staircase in the lower version here is actually below your contrast sensitivity function at quite low spatial frequencies. So this gray here, you see as flat gray; you're not detecting the gradient at all, whereas in the upper one you are detecting the gradient, even saturating out. So this can start to explain why you get this kind of effect; in fact, it can explain it fully. Now, so far I've just been talking about brightness phenomena, but it's interesting to talk about color as well. As a visual ecologist, I'm often taking photos of things, taking spectra of things, using a white or a gray standard. Every time you do that, you make the von Kries assumption, that of global color constancy. But papers like this one and others show that that's not true: color constancy is a function of local and global effects. And it would be really nice to get around that, because, imagine how important this must be in signaling. So, color constancy illusions. This is a common one, Lotto's cube. In this illusion, on the left, where yellow surrounds what look like blue tiles, the tiles are actually neutral gray, and on the right here, what looks like yellow is actually an identical, matching gray. The surrounds make them look the colors they appear. So this is demonstrating global color constancy failing in favor of local, on your screen at this moment. It's a really powerful effect, and sure enough, the model can simulate it nicely: it predicts that the ones you would have guessed were yellow are yellow, blue are blue, et cetera. So we can have a look at that, but it also allows us to combine chromatic adaptation and color constancy together. Chromatic adaptation means the global illuminant changing color and your eyes adapting with it. Take this scene of a forest I mentioned earlier: when you're walking around the forest, everything is pretty much green. How do you work out the white point in a world that's green? Well, the neat thing about this model is that it is a purely feed-forward model. There's no feedback; there's no point at which local normalization is going on, like there would be in the Retinex model. If you multiply the scene, the red channel, by five, or the blue channel by five, you get pretty much the same output: overall, the average scene still looks green and the bluebells still look blue. So this hopefully reconciles chromatic adaptation and color constancy with a purely feed-forward model, which is quite neat. And it can explain other chromatic illusions as well, like this Brown and MacLeod illusion, where colored squares sit against a gray background, and the same colored squares sit against this checkered background. The average of all the background colors is the same in both cases, but the ones at the bottom look more saturated than the ones at the top, and the model explains this quite well. It also predicts that you should get these interesting effects where the top looks more green than the bottom within each rectangle, which is neat. So, as a visual ecologist, I'm really interested in models that I can use for non-humans. Humans obviously have the best data, but for me, I often want to know what the world looks like to a bird, for example.
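The global von Kries assumption mentioned here is just a per-channel scaling, which is effectively what normalising a photo or a spectrum to a white or gray standard does. A minimal sketch, with the array shapes as assumptions:

```python
import numpy as np

def von_kries_adapt(catches, illuminant_catch):
    """Global von Kries adaptation: divide each receptor channel by the
    illuminant's quantum catch in that channel. catches: H x W x 3
    receptor image; illuminant_catch: length-3 vector (e.g. estimated
    from a gray card). Assumes constancy is purely global, which is
    exactly the assumption these illusions show to be incomplete."""
    return catches / np.asarray(illuminant_catch)[None, None, :]
```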
And if you look back at this contrast sensitivity graph, you'll notice how birds have so much lower contrast sensitivity than humans; it's about 10 times lower peak contrast sensitivity. So what on earth would the world look like to them, and how would our model work for them? That's another hope: if the model is generalizable enough, we'll be able to adapt it to work with non-humans. And if that's the case, then the world really does look quite dramatically different to them. Take this quick example of the model: a human would see much more detail than the kestrel, but those warning colors are actually sticking out nicely to the kestrel. And you can even look at distance-dependent effects; for the kestrel viewing from further away, those effects would probably be much more powerful. You can see it's washing out all of the information in the high-contrast dark areas; it can't see into the dark bushes at all. So, to summarize: this model we've developed tries to integrate the crispening effect and contrast sensitivity into a perceptually uniform color space that also takes into account the fact that neurons have a limited dynamic range, and it uses spatial information to work with that. And it makes assumptions relating to coding efficiency: squeezing all this information through neurons with limited bandwidth. And there's a chance, hopefully, that these are very generalizable principles, and that we will therefore be able to adapt this model for use in non-humans as well. Now, I really don't want to sell this model as a unifying theory of vision; no model can ever be that. No model is correct, but some are useful; that is the mantra we should all adopt. But being based on principles of efficient coding hopefully does make it quite universal, which remains to be tested. And it does reconcile all of these weird phenomena with a perceptually uniform contrast space, which is quite unique. It's super, super fast: neurally, the model could be implemented with a single layer of neurons; different weightings of input, the bandwidth limiting of the neuron itself, and then different weightings of output are all this model ever needs. So it's somewhat neurologically plausible. But there are, of course, limitations to the model as well. It makes a few assumptions, the key one being that we don't know whether bandwidth is uniform and whether dynamic range therefore shifts with spatial frequency. So that would be really fun to find out. And we need to do a lot more parameterization; we need more behavioral work to figure out what's going on with the chromatic side of things. So that's it. Thank you so much for listening, and I would be very happy to take any questions. Thank you very much, Jolyon, for this lovely and, I have to emphasize, very clear presentation; for someone who is not so familiar with the field of modeling color appearance, I have to admit that I understood quite a lot. I already went ahead and posted the Zoom room link in the chat, so people can join us already, and as a reminder to the audience, I will be stopping the live transmission maybe five minutes from now. So, the first question that I have: I will start from the fact that you mentioned that this is a purely feed-forward model. There are actually three different questions here, but the first one is: is it surprising to you that you can explain these different illusions with just a feed-forward model?
Like, would you expect some sort of, I don't know, feedback to be necessary, for example given the time scales over which these phenomena occur? So, feedback. The common models at the moment that deal with chromatic adaptation, things like the Retinex model, effectively apply local normalization against a local maximum. So that's local: you need to divide by a number there, you divide a color by its local maximum to get Retinex-style models, and they work quite well, you know. The problem with anything that requires feedback loops is that it would, of course, be neurologically slow. And there's quite good evidence that you're able to see these color constancy effects really quickly, so fast that there would be limited time for lots of feedback loops at a neural level. So yes, I found it surprising that you could have a feed-forward model that always suggested that a green forest was green even if you give it a red input. I found that odd: how can that be so? But it does, and it's neat, and you can explain how it works. Yeah, if you feed a red image into a feed-forward model, why should you always get a green image out? Which takes me to my second question. So, in your model, you say that for different spatial frequencies you have the same uniform bandwidth. Do you have a different bandwidth for the different opponent channels, like red-green or blue-yellow? Do they have the same bandwidth, or how does that work? Yeah, yeah, really good question. And that's where that last little comment I mentioned comes in, about where we struggled to parametrize the chromatic side of the model. So, in short, no, we assume that they're different, quite different. We could measure the bandwidth using the crispening data for luminance, but when it came to the chromatic side, we don't have chromatic crispening data; there are crispening data, but not the right kind that we would need. So we made a hand-wavy assumption based on the processing of natural scenes to come up with sensible numbers, but they do need behavioral validation. So no, the numbers we ended up using for the chromatic side of things were much lower, a bandwidth of about five or so, because if the bandwidth were much higher than that, then the neurons would never be saturating in a natural scene, and that is not very efficient. I see, thank you very much for that. So, as there are no questions currently appearing in the chat, but people are already joining us: in case someone here wants to ask a question, they can raise their hand and I can give them the opportunity to speak, until we stop the broadcast and I officially waive my moderator rights. One last question that I have from my side, Jolyon, and I think it's a quite general one, going back to purely feed-forward models. Because you say this is more biologically plausible, I was wondering: we know that the size of the receptive fields, like the extent of the center or the surround, can change based on the lighting conditions and so on, especially for the surround, which can become much, much broader. Do you account for this in your model? Or would you think this would be important to account for in future attempts to improve it? Absolutely. So, we have not accounted for that; we have just used the standard assumption of the surround being, what is it, 1.6 times larger.
So that would, I'm sure, dramatically change things, and it's easy for us to simulate what effects it would have. There are a number of parameters like that which you could tweak, but for simplicity our rationale was to start with the simplest model we could come up with that does the job; we can add complexity later. We started with the most parsimonious, most generalizable model, particularly because I'm interested in doing this for non-human vision: I want something I can use for a non-human, where we know so much less about the visual system itself. Of course, yes. I have many questions about aquatic animals as well, but since some people might be shy and not want to appear live with their question, I will be stopping the live broadcast now so we can have a more informal chat with the people that are here. I would like to thank you once again, Jolyon, for this wonderful talk and for honoring us by giving a talk in our series. Thank you very much, a pleasure. Thank you. So, we are officially offline.