Great, and I think we are live. Wonderful. So, hello everyone, and welcome to another World Wide Neuro talk in our Sussex Vision series. My name is Nora, and I'll be hosting our talk today. First, I would like to thank again all of the World Wide Neuro organizers for coming up with this whole idea and making it possible for all of us to enjoy such an enormous variety of talks during the spring and summer. It has been really wonderful to hear all these different kinds of talks. Next, I would like to remind our audience that you can send your questions directly in the chat on YouTube, and we will collect the questions and ask the most popular ones right after Tod's and Emily's talk. So, today we have the pleasure of hearing a joint talk by Tod Thiele from the University of Toronto and Emily Cooper from the University of California, Berkeley. Tod did his PhD in Shawn Lockery's lab at the University of Oregon, studying the neural circuitry of chemotaxis behavior in C. elegans. During this time, he also got interested in zebrafish as a model animal, and he did a postdoc with Herwig Baier investigating motor control circuits in zebrafish. Now, Tod's own group is studying zebrafish neural circuits involved in sensorimotor integration. Emily did her PhD at the University of California, Berkeley with Martin Banks on depth perception in real and pictured environments. After getting her PhD, she conducted postdoctoral research at Stanford University and then joined the faculty at Dartmouth College in 2015. Emily moved back to Berkeley in 2018, where her lab examines the principles underlying biological vision with applications to both basic and translational questions. So, the topic of today's joint talk is understanding the visual demands of underwater habitats for aquatic animals used in neuroscience research. So, Tod and Emily, you're welcome to start. Great. Well, I want to thank Nora and Tom for the invitation and the organizers of the World Wide Neuro series for this exciting opportunity to talk about this project. And again, this is a joint talk between myself and Emily Cooper, and we're going to be talking about understanding the visual demands of underwater habitats for aquatic animals used in neuroscience research. I will give part one of the talk, which will be a project overview: we'll talk about some of the field research we have done and some engineering efforts related to future aspects of the project. And in part two, Emily will discuss an analysis of natural scene statistics from zebrafish habitats. Okay, a kind of big-picture slide here. When trying to understand how a perceptual system works, taking a neuroethological approach is often quite useful. What you want to understand are a couple of aspects: in particular, what behaviors that perceptual system helps to shape and drive, and also to have an intimate understanding of the environment in which that perceptual system evolved. When studying vision, this of course involves understanding natural scenes. The theoretical space of visual imagery is effectively infinite, but the images we see in the natural world have many common features. Statistical regularities in natural visual signals can be exploited by the visual system to efficiently and accurately perceive the world. Here we just have a terrestrial and an aquatic scene: gorillas in the jungle and fish in a coral reef environment.
Now, the ultimate goal of our project is to understand the visual processing of optic flow in natural scenes. Optic flow can be produced either by self-motion or by motion induced by environmental factors such as wind and current. During self-motion, optic flow can be quite useful for an animal to understand how fast it's going and how well it's navigating in an environment. And for fish, and also for animals that fly, external factors such as current and wind can move them quite strongly in their environment, so these animals have quite effective behaviors for stabilizing themselves within dynamic environments. I'll move away from natural scenes for now and talk a little bit about these behaviors. Optic flow induces stabilization behaviors, in particular the optomotor response and the optokinetic response. When animals, in this case, do a rotation, a counterclockwise turn, they experience this kind of visual flow field with motion moving to the right. During a forward translation, they would experience motion moving backwards, and in a receding motion they would experience a flow field moving forward. Now, if an animal is stationary and being moved by the environment, such as a fish in a flowing river, it would be moving backward due to the current, but the world would appear to be moving forward. Fish have a tendency to want to follow the optic flow, presumably to avoid being swept downstream. This can be studied in the lab, and here I have a movie about 20 years old, a quite famous old movie from Mike Orger in Herwig Baier's lab, showing larval zebrafish in a runway. This movie is not playing very well, unfortunately; it was playing well 10 minutes ago, but you'll have to trust me when I say that these fish are following a moving grating and swimming quite robustly with the direction of optic flow. And Mike has also shown in subsequent papers that both Fourier and non-Fourier motion detection is used by these animals. Hold on, having a sharing issue. Okay, I'm back, I think. Nora, can you see my screen? Currently not. No. No, I can see your entire screen. Yes. Okay, good. So here's this video playing again, not so well. Anyway, I will move on to the optokinetic response. Here is a video from the Arrenberg lab showing a larval zebrafish that is head-mounted, and as the video plays, optic flow is presented from right to left, and you can see the eyes producing smooth pursuits, followed by saccades to reset the eyes and allow them to continue pursuing the stimulus. Head-fixed preparations for the OMR can be used for calcium imaging studies: you can either have a head-fixed, tail-free preparation, or, in this case, I'll play a video from a paper in 2012 where the ventral roots are recorded from in a fish that's paralyzed with bungarotoxin, and this fish has control of the visual stimulus in a closed-loop fashion, such that outputs from its motor neurons allow it to control the visual stimulus. This type of assay was used in conjunction with calcium imaging to produce the first cellular-resolution brain activity maps in any animal. What they found were many visually responsive areas in the brain, and I'll highlight area 4 here, which denotes the pretectum. This and other studies have found that the pretectum plays a very important role in the optomotor response and the optokinetic response.
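To make the geometry of these flow fields concrete, here is a minimal sketch in Python/NumPy of the idealized optic flow on a spherical eye for pure rotation and pure translation. This is an illustrative aside, not code from the project; the function name and the example numbers are assumptions.

```python
import numpy as np

def ideal_flow(d, omega=None, v=None, depth=1.0):
    """Idealized angular optic flow (rad/s) at viewing direction d (unit
    3-vector) on a spherical eye, for self-rotation omega (rad/s) and/or
    self-translation v (m/s), with scene depth `depth` (meters) along d."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    flow = np.zeros(3)
    if omega is not None:
        # Rotational flow: independent of scene depth
        flow += -np.cross(omega, d)
    if v is not None:
        # Translational flow: scales with 1/depth (near things move faster)
        v = np.asarray(v, dtype=float)
        flow += -(v - np.dot(v, d) * d) / depth
    return flow

# Example: forward translation at 60 mm/s, looking 45 degrees to the side,
# with a scene point half a meter away (speeds chosen to match the field trajectories)
d = [np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0]
print(ideal_flow(d, v=[0.06, 0.0, 0.0], depth=0.5))
```

The key property the sketch captures is that rotational flow is the same at every depth, while translational flow falls off with distance, which is why the two self-motion components produce such different flow fields.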
Recent research on the zebrafish OMR: this past year there have been several papers that have found important clues about how fish process and respond to optic flow. In this paper from Ruben Portugues' lab, they presented zebrafish with a variety of gratings of different sizes and spatial frequencies, and this is showing the average of all those visual stimuli at the time when a fish produced a bout. What you can see is this pronounced light-to-dark transitional zone, and what they concluded from this is that light-to-dark transitions are the key driver of optomotor behavior. The Arrenberg lab also had a paper this year where they used a cylindrical stimulation arena, with the fish placed in the middle, to try to figure out where in visual space optomotor stimuli are most effective at driving swims. This plot is showing the different zones where optomotor stimuli were presented; this point here is the front of the fish and these are lateral positions. You can see hot regions in the lower temporal field of the fish; these zones were found to be best at driving swims, so this is in good agreement with what the Portugues group found. Ruben's lab also did a recent study where they showed that fish produce behavioral and neural responses to naturalistic third-order motion cues. This was done using so-called glider stimuli: instead of presenting gratings, they presented these triangular-type stimuli, which contain third-order motion cues, so fish have a very sophisticated motion processing ability. To conclude here: we know a lot about the optomotor response, the stimuli that drive it, and the brain centers that process optic flow. But one thing that's lacking is an understanding of the underwater visual statistics of the places where these animals live. This is a video showing, not optic flow, but the visual environment, again not playing very well, of a fish habitat in Canada. What you might be able to see is that in addition to the complexity of the rocks and vegetation around, there are also these rippling effects that occur. These are called caustics in optics, and they're produced by light rays being refracted at the surface and then reflected along the bottom of the aquatic environment. So, our project overview: again, this is a collaboration funded by the Human Frontier Science Program, between the Arrenberg lab in Tübingen, the Cooper lab at Berkeley, the Juntti lab in Maryland, and my lab in Toronto. The first phase was to record natural videos in a very immersive way using a 360-degree camera. We then want to quantify visual statistics, which Emily will talk about. And then in the future, look at brain activity when fish are presented with these videos, and also look at behavioral outputs. We also have a comparative approach for this project. We're of course using zebrafish, because we know a lot about their optomotor behavior, plus they have all these fantastic tools for systems neuroscience. But we're also interested in this African cichlid, Astatotilapia burtoni. We know a lot about their behaviors in nature; there have been decades and decades of work on these African cichlids. And one of the goals is to port some of the tools, such as calcium imaging, into this species. And I should say there's a large trend right now among zebrafish researchers of looking at other types of fish to study. The Baier lab is looking at shell-dwelling cichlids.
The Orger lab is looking at a giant danio, and the Douglas, Del Bene, Judkewitz, and Bass labs at Danionella. So this is a video showing young burtoni. For those who haven't seen cichlids: you'll notice these are four weeks old, but they're already having quite dynamic social interactions. Again, another video, sorry, not playing wonderfully. But they have different swimming patterns, they have territoriality, and they're quite different from zebrafish at this stage, so we're excited to start working with them. So, back to natural scenes, the first aim of this project. We took some inspiration from Jacques Cousteau, who said the best way to observe a fish is to become a fish. My first idea for getting natural videos from the habitats of these species was a drone: maybe we could tether a camera to the bottom of a drone, or mount one on top of one of these small aquatic drones. Then I had a sobering conversation with Tom Baden, who said, you know, he'd been to India, and the water is quite shallow; this type of thing would end up being kind of a Titanic for our project. It would not work. So we came up with another idea, and that was to use a robotic system to move a 360-degree camera around an aquatic environment. We wanted to get the camera as far away as possible from the components driving its motion. The core of the system is an XYZ gantry system; each gantry has 30 centimeters of travel. So these are the gantries here. The camera is connected via carbon fiber booms. There's a rotational motor that sits here, so we can rotate the camera with good precision. And the whole system sits on carbon fiber tripod legs, so we can get it into awkward positions in the wild. It's all controlled via a field laptop and powered by a small generator. We named this thing ANSEL, for Automated Natural Scene Exploration Laboratory. And here is what it looks like in the flesh on one of our field trips. A little bit about the camera we used: we used this Insta360 ONE X camera, which came out just in time for our studies. It's a small camera with two lenses, one on each side, each with a wide field of view. It has nice high resolution and records at a nice temporal frequency, 100 frames per second. And it comes with this nifty little dive case, so we didn't have to worry about coming up with a waterproof housing for it. Our collaborators did lots of calibrations of these cameras prior to us taking them to the field: the spatial distortion of each camera in air and underwater, the rotation and translation between the two cameras, spectral sensitivity, luminance linearity, and the modulation transfer function. They also developed a video processing pipeline to linearize pixel intensity values, blur to remove compression artifacts, equalize angular resolution based on the spatial calibration, and then generate rectilinear images. And here is an undistorted virtual projection onto a sphere. Our plan for the field was to record videos from a diversity of habitats for each species: vegetated and non-vegetated, flowing water and still water, clear and turbid. We then wanted to use the robot to get ground-truth camera trajectories in aquatic and terrestrial environments. We did translations in x, y, and z at different speeds, 10, 40, and 60 millimeters per second. We also did rotations at 5, 20, and 50 degrees per second, and some combinations of translation and rotation.
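As an aside on that last processing step, here is a minimal sketch, assuming an equirectangular 360-degree frame as input, of how a rectilinear (pinhole-like) patch can be resampled from it. The function and parameter names are illustrative, not the project's actual pipeline.

```python
import numpy as np

def rectilinear_patch(equi, fov_deg=75.0, yaw_deg=0.0, pitch_deg=0.0, size=256):
    """Sample a rectilinear (pinhole-like) patch from an equirectangular
    360-degree frame `equi` (H x W array; azimuth maps to columns)."""
    H, W = equi.shape[:2]
    f = (size / 2) / np.tan(np.radians(fov_deg) / 2)      # focal length, pixels
    u, v = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2)
    # Ray direction for each output pixel (x right, y down, z forward)
    rays = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Point the virtual pinhole camera via yaw (left/right) and pitch (up/down)
    cy, sy = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    rays = rays @ (Ry @ Rx).T
    # Convert rays to spherical coordinates and look up source pixels
    az = np.arctan2(rays[..., 0], rays[..., 2])            # azimuth, [-pi, pi]
    el = np.arcsin(np.clip(rays[..., 1], -1, 1))           # elevation
    px = np.round((az / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    py = np.round((el / np.pi + 0.5) * (H - 1)).astype(int)
    return equi[py, px]                                    # nearest-neighbor sample
```

A 75-degree field of view is used here only because that matches the sample size Emily describes later in the talk.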
And finally, we took stationary videos at all locations. So in August of last year we went to Zambia, to one of these big rift lakes that is home to African cichlids, Lake Tanganyika, at its southeast corner, and we sampled from six sites of varying habitat: from above Kalambo Falls to other river sites, and three sites along the lake shore. In total we got 364 videos, about 300 gigabytes of data. Just some pretty shots from Zambia; here's where we stayed. Cichlid researchers go there at least a few times a year and have really set up a wonderful operation down there. Here's the robot in some of the different habitats we recorded from. Here's a postdoc and a new grad student, Nicholas, operating the robot in one of the habitats, which was actually in a harbor. And I want to point out Walter Salzburger from the University of Basel, who was a tremendous help with the logistics of getting us down to Zambia and helping us find cichlids in the lake. Alex Jordan was also along, from the Max Planck Institute, doing his own research, and this is the village chief, who we met with to get approval to do our research. So here's a movie, hopefully it plays well, of ANSEL in action in the Kalambo River. You can see it's doing an XYZ trajectory. You can hear the generator powering it in the background; we have all the cables running into the boat, and here's Nick running the laptop. We typically were out there for two to three hours to set up and do the full set of trajectories. So here is a 360 movie of an XYZ rightward trajectory at 60 millimeters per second, slowed down a bit. The movie has started, and now it's kind of lifted off, and we're moving in this XYZ plane. I am crudely, digitally moving the camera around; the camera wasn't actually rotating, but this is just to give you a sense of the 360-degree environment. Again, I'm just panning around. And now I will show you that we have a nice full view of the bottom there. The cameras worked quite well, the robot worked quite well, so we think we have a nice set of videos from these habitats. The robot is actually off in the distance here; you can't see the legs due to the vegetation. This was a particularly nice site; we had the full leg extension, so we were in a couple meters of water here. Here is an overview of the site: it's a reeded area very close to one of the villages. And here is the underwater scene from that location. We saw lots of cichlids; this is an adult female burtoni. They were often just munching on vegetation, but we probably saw up to 50 in this location. Okay, that was Africa; on to India. We went to India in October of this year. We are certainly not the first to do this. We were lucky to have Dr. Arunachalam and his student, Dr. Vijaykumar, along on our trip to guide us to zebrafish locations; he has previously sampled throughout India. Dave Parichy was sort of the first western zebrafish researcher to do this, sampling several locations in western India in the 2000s. I forgot to mention David Eds, who in the 1990s sampled lots of zebrafish habitats in Nepal. And of course, Tom's group went to India a few years ago, to West Bengal, and Friedrich Drutfeld has also been to India, at sites also in West Bengal. Recently, Benjamin Judkewitz has been looking at Danionella sites in Myanmar, specifically looking at acoustic behaviors in the wild.
Now, here is where we went in Assam. Golaghat was our kind of home base, and we branched out from there every day to different sites, trying to get to very natural sites and forested regions. We went up to the border of Bhutan, and also to some lowland regions. In total, we were at nine sites and captured 245 videos, about 400 gigabytes of video data. Some photos of the sites we went to: rocky streams; here's more of a rice paddy zone; this was a small stream, a little more turbid, where we recorded in the river and then did a set of terrestrial movies; and this was another site, in a vegetated region. Here are some of the zebrafish our collaborators caught for their studies. We saw zebrafish everywhere; only at one site were we unable to find zebrafish, and usually you found them within 10 minutes. This is quite a large zebrafish to find in the wild; typically they were skinny. Here's an example of the robot doing a rotation in a flowing stream, and just another rice paddy scene. Some more scenery: this is near the Bhutan border, I'll put a little star here. Nora was along on our trip doing her own studies. Here we are setting up in a rocky location. Here I am in a lowland forest stream, a more turbid stream. I just wanted to show you: this is a forest stream in a hilly region of Assam, and the water looks quite clear. We found zebrafish here as well, and we also often saw this species of killifish. Here I want to show you a video of fish swimming against the current in the stream, in the shadow of the robot. These fish typically tried to hold still within the stream, and a fish ecologist friend has informed me that this is a common strategy for fish that just want to stay put with their mouths open and basically collect the conveyor belt of food that is flowing down the stream. Now, from that same forest stream, this is now underwater, and I'm going to show you a rotation in the stream; you can notice it's quite turbid. Because we're looking through more water than we were from the surface, even though it's quite a shallow zone, we noticed that turbidity can be deceiving when you just look from the top. And so here we do a rotation. The sun was quite low at this location, so that was the sun, and this rippling, this caustic as I mentioned earlier, this effect of light being refracted by the surface and reflected along the bottom, produces these light and dark transitions. This is going to be a video of zebrafish swimming along the lake shore, not a lake shore, stream shore I should say, and you see lots of power within these ripples moving towards the shore. One potential hypothesis is that these ripples look a lot like the optomotor stimuli we would produce in the lab, and that they could be driving fish towards shallower waters along the sides of the stream, potentially where protozoa or other prey are accumulating. Just speculation there, but I think an interesting hypothesis. So, we potentially caught a wild larval or juvenile zebrafish. This was the location, a rice paddy, and this was sort of the micro-location where Vijay, one of our Indian scientist collaborators, was able to capture this guy, so I'll play a video. It's not playing great, but you have to trust me: it's doing swims that look a lot like what we see in the lab for larval zebrafish, and if you look closely at its melanophore patterning, it looks very similar.
So it's possible that we observed a larval zebrafish, but I think in the future more field work would be very cool, to go and have a really focused effort to find larvae. Okay, back to the project overview. I talked about our recording of natural videos, and Emily, in the second half, is going to talk about quantifying visual statistics from these scenes. And of course, in the future we have these goals to look at brain activity and behavioral responses to these natural scenes, so I'll talk a little bit about our efforts towards that goal. Okay, the Arrenberg lab, Tim and Ari, to begin. They've developed this really cool system for presenting these natural scenes to fish while performing calcium imaging or doing behavior. So here's the imaging setup: you have a fish mounted within a glass sphere that can be frosted to allow back-projection. The sphere allows for the immersion of a high numerical aperture objective for calcium imaging. The way it works is that there's a LightCrafter that projects the natural scenes onto a mirror, which then projects the scenes onto a pyramidal mirror, splitting that image into four quadrants. These parts of the visual scene are then relayed through a lens and a mirror and wrapped onto the sphere. Here's a ray diagram showing the optical path of the setup. I should say that this is near completion; there's still some fine-tuning being done, but we hope in the coming months to be able to start imaging. There's an early video showing a camera being kind of dunked into the sphere while a grating is being projected; you'll see some artifacts that they're working on removing, but the system seems to be working quite well. And here we projected a natural scene onto the sphere. This is actually a white balloon stretched over the sphere to try to increase the contrast of the projection. This is a movie from Zambia; we have a cichlid moving across the scene. So far, things are looking quite promising. Another bit of engineering, a bit of genetic engineering, was done by Scott Juntti's lab. There were no GCaMP-expressing cichlids when we started this project, but Scott has been a pioneer of transgenic approaches in cichlids. He's generated a stable line that expresses GCaMP7b throughout the brain; we chose 7b because it's one of the brighter variants, and we hope to image even at later ages. We have quite nice expression. He also generated a tyrosinase mutant; these lack all pigment, and this will of course be important for two-photon calcium imaging. And I will say, last night we received our first shipment of these cichlids from Scott's lab. This was quite an effort; the Canada-US border is problematic right now due to the pandemic. But we have very happy fish in our facility right now. This is great news, and we hope to start working with them shortly. Okay, I will now pass it on to Emily for part two. All right, so I'm just going to take over the screen share. Nora, can you confirm that audio and video all seem okay? Yeah, it all looks good. Perfect, thank you. So thanks everyone, and thanks to the organizers for putting together this talk. I'm going to take over the second part today. My goal is going to be to dig a little bit deeper into the quantitative aspects of the data set that was captured in the field work that Tod just described, and to present some preliminary analyses.
And these analyses are really just going to be focused on the habitat of the zebrafish, because that's where we've started off. And this is all still ongoing work, so I'm going to try to give a flavor of the variety of analyses that we're doing and what we expect to learn down the line from these data. Just as a reminder, zebrafish are a popular model organism in neuroscience because of their amenability to advanced imaging methods. They have two eyes that they use to sense visual information in their environment, and here's just a simple diagram highlighting some of the key anatomical features of the zebrafish eye. These eyes are positioned laterally on the head and provide a wide field of view of the world around them. And although their view of the world is wide, it's of relatively low spatial resolution, estimated at around one cone per visual degree at the larval stage. Visual signals elicit a range of different behaviors in this organism. Now, as Tod said, understanding vision and visually guided behaviors in model organisms requires investigating the relevant neural systems and circuits, understanding the animal's perception and the animal's behavior, but also understanding the visual demands placed by the natural habitat of the animal. Understanding that habitat is important because, over the course of evolution, we believe that animals have developed perceptual systems that are optimized to perform within their specific environment. Motivated by that particular observation, there's a large body of prior work that has directly examined the visual statistics and patterns present in imagery of natural environments. Here I'm actually just listing a relatively small slice of the great work that's been done characterizing statistical regularities in natural imagery. This prior work has really nicely shown that animals have visual systems that are well adapted to the specific demands of their habitat. But this work has primarily been driven by insights about terrestrial environments and terrestrial animals, and by comparison, relatively less is known about the visual demands placed by natural aquatic habitats, like those where zebrafish and cichlids are native. So that's the gap that we're seeking to contribute to filling with this project, with a particular emphasis on motion perception. With that in mind, I want to dig into the new data set. The first thing that we noticed is that the environments we captured in India were quite visually diverse and different from each other. Here I'm showing four example images from four different field sites. There are a couple of commonalities you can note, like the water being relatively shallow and the floor being kind of sandy, but these environments were also pretty variable in terms of the turbidity of the water, the amount of vegetation, and the relative stillness of the water where the zebrafish were found. That diversity is consistent with prior work characterizing the range and scope of zebrafish habitats. So, to examine statistical regularities that were particular to this ensemble of aquatic habitats, we also collected a comparison data set of nearby terrestrial and supra-aquatic sites. As I start digging into some analyses, I'm going to do basic comparisons of the visual properties of the aquatic habitats to the nearby terrestrial environments.
For clarity, I'm going to use this water droplet and this leaf icon to indicate samples that come from aquatic and terrestrial habitats, respectively. As a reminder, each sample in the raw data set is a 360-degree video captured at 100 Hertz, and the camera is either held stationary for the duration of the capture, or rotating or translating or both, in some predetermined trajectory. For the first analysis that I want to show, we started by subsampling from the 360 videos in space and time to measure some basic visual statistics. Here we took samples that were 75 degrees by 75 degrees and lasted for 10 seconds. We then selected the green channel of the video, linearized the pixel values, and removed the lens distortion. We selected the green channel because it has pretty high sensitivity near the zebrafish long-wavelength-sensitive cone, which is known to be important for driving behavioral responses to visual motion. This sampling resulted in 34 terrestrial and 43 aquatic video samples, and initially we'll be looking just at those captured with the stationary camera. The first result I want to show you is the basic histogram of light intensity levels in these samples. We'll start with the terrestrial samples. In these plots, on the x axis we show the normalized light intensity: each clip is divisively normalized by its average intensity, such that values of one are equal to the mean of the clip, values greater than one are brighter, and values less than one are darker. On the y axis is the probability density. Each thin line represents an individual sample, and the thick line indicates the average across all of the terrestrial samples. We observed that the average distribution of light levels in the terrestrial habitats tended to be unimodal and had a pretty heavy tail in the positive direction, which is overall consistent with prior work on natural visual statistics. Now, with this method, we can compare this histogram to the aquatic videos. We saw that the aquatic histograms had qualitatively similar properties, but on average the shape was quite different. These results are consistent with one prior study of aquatic environments, conducted by Balboa and Grzywacz, which suggested that intensity distributions from individual aquatic images tended to have overall lower contrast as compared to terrestrial ones. To compare these histograms more directly, I selected just one feature of the histograms to talk about today, and that is the amount of skewness, or asymmetry, in the two luminance distributions. Skewness quantifies how lopsided a distribution is, and it's helpful for understanding luminance asymmetries in natural scenes. For example, environments with negatively skewed histograms will have a dominance of bright points; that is, there will be relatively more values that are greater than the mean. Environments that have zero skew will have a balance of points that are brighter and darker than the mean. And environments with positively skewed histograms will have a dominance of relatively dark points in the scene. A large body of prior work suggests that natural scenes tend to have positively skewed histograms, and therefore a dominance of features that are darker than the average luminance in the scene.
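A minimal sketch of these per-clip statistics, assuming linearized green-channel clips stored as NumPy arrays. The function name and bin settings are illustrative, not from the talk.

```python
import numpy as np
from scipy.stats import skew

def clip_statistics(clip, bins=100, max_val=5.0):
    """Normalized luminance histogram plus bright-dark asymmetry measures
    for one video clip (a frames x height x width array of linear values)."""
    norm = clip / clip.mean()                 # 1.0 == spatiotemporal mean
    edges = np.linspace(0.0, max_val, bins + 1)
    density, _ = np.histogram(norm, bins=edges, density=True)
    return {
        "density": density,                   # per-clip intensity histogram
        "edges": edges,
        "skewness": skew(norm.ravel()),       # >0 means a heavy bright tail
        "dark_fraction": np.mean(norm < 1.0), # proportion darker than the mean
    }

# Hypothetical usage: average the thin per-clip curves into the thick one
# mean_density = np.mean([clip_statistics(c)["density"] for c in clips], axis=0)
```

The `dark_fraction` field corresponds to the coarse "dark dominance" measure discussed next.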
Interestingly, that bright-dark asymmetry, or dark dominance, has been shown to be exploited by different terrestrial visual systems, ranging from flies to humans, to efficiently encode spatiotemporal contrast and motion in natural environments. And quite recently, Yildizoglu et al. provided evidence for similar adaptations in the zebrafish visual system. So I think it's particularly timely now to investigate the prevalence and pattern of bright-dark asymmetries in a typical zebrafish habitat. To look at that, what I'm going to show you here is a histogram of the skewness calculated from each sample, and then we'll compare the distributions across the terrestrial and aquatic habitats. We found that samples from both habitats on average had significantly positive skew, so more dark points. But we also saw that the terrestrial habitats tended to be more skewed overall than the aquatic ones. To get this into units that are a little easier to think about, we next quantified this difference in terms of the overall proportion of light levels that fall below the spatiotemporal average, which is sort of a coarse definition of dark dominance, or dark contrast. When we looked at the data that way, we found that the average was about 71% in the terrestrial habitats and 60% in the aquatic habitats, so in both cases significantly more than half of the points in the samples. Taken together, these two bright-dark asymmetry analyses suggest that the zebrafish habitat has some nice commonalities with terrestrial habitats, which have been more thoroughly studied in prior work, but we can also note that there are clear quantitative differences between the two. Ideally, we want to understand whether and how these differences have implications for visual cues in the environment: for example, the ability to detect predators and food, and particularly, in this project, as that relates to visual motion. By way of example, if we look at a visual input like this one from our data set, I'm going to start playing the video. In that yellow circle there is some behaviorally relevant information; specifically, there's a group of fish swimming by. But it appears really low contrast in this visualization, because in the video the intensity and color of the fish are very similar to the intensity and color of the background. You could imagine that having a visual apparatus that's well tuned to the statistical regularities of your environment might facilitate detection of even weak signals like this, and so we wanted to look at that question next. We did this by examining luminance response nonlinearities, actually optimal luminance response nonlinearities, and how they can maximize the entropy of a neural representation of light intensity values. That is: how can we transform natural visual patterns to maximize the expected value of visual information? This is an analysis based on information theory, and I think it's best illustrated by a quick little example from photography. Imagine that you wanted to take a nice picture for your personal website, but you ended up capturing an image where a lot of the important information had relatively poor contrast, kind of like the school of fish had poor contrast in the video that I showed. If you look at the image histogram of pixel intensities, you can see what's happened: all your intensity values fall in the middle of the possible range, which is mapped out here from zero to 255.
But performing an operation called histogram equalization can help you spread the wealth of your encoder and get the most out of your image data, by better matching the statistics of your input. It turns out that a simple way to do this is to apply a pointwise nonlinear transformation to each pixel value that follows the shape of the cumulative probability of the image histogram. For example, if we treat the image histogram as a discrete probability distribution of light intensities, p of x, we can calculate the cumulative distribution that I've shown on the right. Applying that as a pointwise nonlinearity to each pixel in your image gives you a new image where every value is approximately equally likely, and the differences in your signal, as you can see, are much easier to detect. Now importantly, changing this pointwise transformation even slightly can have pretty big effects on the output. Here I've just made it slightly steeper, and that results in an image that doesn't meet our criterion for optimality very well. To return to the problem at hand of natural vision: neural systems are often thought to employ transformations that are similar to histogram equalization, in order to maximize how informative the neural responses are. And in the case of the visual system, the optimal form of the response nonlinearity is going to depend importantly on the average luminance histogram that is expected across the visual environment. This principle was used really elegantly in classic work by Laughlin, summarized in this figure here, where he showed that the contrast response nonlinearity of large monopolar cells in the eye of the blowfly is well matched to the optimal nonlinearity calculated directly from measurements of contrast levels in their habitat. So, using the same analysis, we're going to ask whether the optimal response nonlinearity differs between aquatic and terrestrial habitats. As a reminder, we already have the average luminance histograms that we need for this analysis; I've shown you these already. If we integrate those to get the optimal response nonlinearities, you can see that, within the range of intensity values we examined, we obtain two nonlinearities that appear quite different from each other. An intuitive way to see how meaningful these differences are is to apply each one as a pointwise nonlinearity to an example scene, which I've done here. If I play this video again, this is that same video of the school of fish swimming by, and you can see that the contrast is visually higher when we apply the transformation derived for the aquatic habitat in particular, versus when I apply the transformation optimized for terrestrial scene statistics. Intuitively, this is because we've stretched out the pixel values a little bit more in the relevant range in the aquatic example. And in fact, when we performed a statistical analysis, we found that each nonlinearity we calculated was significantly better at maximizing the entropy of the output values for its own environment. So this analysis suggests that the zebrafish habitat differs sufficiently from terrestrial habitats in terms of intensity statistics to place a different set of environmental pressures on encoding variations in light level across the visual field.
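A minimal sketch of this equalization step, assuming the pooled, normalized intensity samples from above; the names are illustrative, and this is the generic cumulative-histogram construction rather than the project's exact code.

```python
import numpy as np

def optimal_nonlinearity(samples, bins=256):
    """Entropy-maximizing pointwise response nonlinearity for an ensemble
    of intensity samples: the cumulative distribution of their histogram
    (the same construction Laughlin applied to blowfly LMC responses)."""
    density, edges = np.histogram(samples, bins=bins, density=True)
    cdf = np.cumsum(density * np.diff(edges))    # integrate the density
    return edges[1:], cdf                        # response mapped to [0, 1]

def apply_nonlinearity(image, edges, cdf):
    """Pass each pixel through the nonlinearity (histogram equalization)."""
    return np.interp(image, edges, cdf)

# e.g. equalize a frame using statistics pooled over aquatic clips:
# edges, cdf = optimal_nonlinearity(aquatic_pixels.ravel())
# frame_eq = apply_nonlinearity(frame, edges, cdf)
```

Swapping in a cumulative distribution derived from the wrong environment, as in the terrestrial-versus-aquatic comparison described above, is exactly the manipulation that leaves the fish at low contrast.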
But importantly, we also know that these histograms are a pretty coarse measure of natural visual input, because they don't tell us anything about the spatiotemporal structure of the scenes. Previous studies have done a nice job of highlighting the rich spatiotemporal features that are prominent in underwater habitats, and Tod showed some really nice examples in the earlier part of the talk, such as ripples, vegetation moving through the water, and flowing particulates. So next, we used a spatiotemporal power spectrum analysis to quantify whether the aquatic environments have a consistently different composition of spatial and temporal frequencies. I'm not going to go into the details of how to conduct an analysis like this today, because I want to wrap up by showing some cool optic flow data. But briefly, what I'm showing here are the average power spectra of terrestrial and aquatic video samples, in terms of temporal frequency on the x axis and spatial frequency on the y axis. Our analysis suggested that one of the key differences in spatiotemporal structure across these two types of habitats is that the aquatic environments seem to have more complex temporal structure, whereas the terrestrial environments seem to have more complex spatial structure, within the frequency ranges we analyzed. So in the spectra, if you look at the aquatic samples, you should note the presence of more power at higher temporal frequencies, and if you look at the terrestrial samples, you should note relatively higher power at higher spatial frequencies. This type of spectral analysis is really common in natural scene statistics, and it's useful for giving us some insight into global motion statistics and global spatiotemporal patterns. But ultimately, we're interested in reconstructing and understanding optic flow all across the visual field, because that's what's important for the visually guided behaviors we're most interested in. So the last thing I want to show you are some preliminary visualizations of optic flow caused by translational and rotational self-motion of the robot that Tod constructed. For this analysis, we're going to treat the two cameras of our 360 system as being approximately similar to the two eyes of a larval zebrafish, and then we'll start exploring the optic flow patterns that might be typical of the fish's natural visual experience. Here, what I've done is warp those two camera views into a spherical coordinate system, in terms of azimuth on the x axis and elevation on the y axis, with the left camera and the right camera mapping to the two different sides of the visual field. I'm going to first play this clip, where the camera is translating backwards relatively slowly, so you can see some motion in the environment, but you should also hopefully be able to note the global flow pattern from the camera translation. Now what I'm showing is that we used simple spatiotemporal derivative filters to measure the local motion signals across the visual field; at a set of sample points, I'm illustrating the motion direction as a color, shown in the color bar to the right, and at each location the radius of the circle indicates the speed of motion in that direction.
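A minimal sketch of this kind of derivative-based local motion measurement, assuming two consecutive grayscale frames as NumPy arrays. This is a generic Lucas-Kanade-style estimator; the actual filters used in the project may differ.

```python
import numpy as np

def local_motion(prev, curr, win=15):
    """Gradient-based local motion estimate between two consecutive frames,
    returning (u, v) pixel displacements on a coarse grid of windows.
    Assumes brightness constancy, which caustics and ripples will violate."""
    Iy, Ix = np.gradient(curr.astype(float))        # spatial derivatives
    It = curr.astype(float) - prev.astype(float)    # temporal derivative
    H, W = curr.shape
    flow = np.zeros((H // win, W // win, 2))
    for i in range(H // win):
        for j in range(W // win):
            sl = (slice(i * win, (i + 1) * win), slice(j * win, (j + 1) * win))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -It[sl].ravel()
            # Least-squares solution of A @ [u, v] = b within the window
            uv, *_ = np.linalg.lstsq(A, b, rcond=None)
            flow[i, j] = uv
    return flow  # direction = arctan2(v, u); speed = hypot(u, v)
```

The per-window direction and speed from such an estimator are what feed the color and circle-radius visualizations described here.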
In these visualizations, you can see there's a relatively consistent pattern of opposite motion in the two cameras, which is what you would expect from translational motion, but we also see substantial variability across the visual field, because not all spatiotemporal derivatives are caused by self-motion. You can examine that variability: for example, here I've plotted a histogram of the directions of each point of local motion in each eye. And you can see that, despite this variability across the whole visual field, there is still good information about the fact that there's a backwards translation happening, which is embodied by the tendency of the right camera directions to be opposite of the left camera directions. But we also see a good amount of variability within each camera. So from our initial analysis, we think it's likely that reliable information about self-motion is not distributed evenly across all elevations and azimuths in the visual field. Now, the pattern that we see during translation is, as expected, quite distinct from the pattern of optic flow seen during rotational motion, and I'm going to show a rotational video here. The video should be rotating. This is actually a video from one of our Africa field sites, but I thought it was a particularly nice example, so I picked it to show today. Same as with the translational video, I can overlay local motion directions and speeds, and you can see that with rotational motion you get relatively coherent motion across both cameras in the same direction, but you can still note a good amount of variability, particularly around where the vegetation is in the lower part of the visual field. To dig into this further, what I've done now is play the video again, and underneath it I'm playing a video where, instead of the direction of motion, the intensity of each pixel now indicates the speed of motion. You can see these sort of hot spots where there's faster motion than elsewhere in the visual field, and these really do correspond well to locations where there are caustics and ripples in the water, which are violating the brightness constancy assumption associated with this type of motion estimation. Importantly, we expect that these natural optic flow fields are going to differ systematically from the fields created by idealized synthetic stimuli that simulate pure rotational and translational self-motion, where the environment is constant and brightness constancy is conserved. In ongoing research in the Arrenberg lab, the team is using calcium imaging and a novel noise motion stimulus that they've created to map the receptive fields of pretectal neurons in terms of their sensitivity to optic flow across the visual field. I'm showing here just an example of the optic flow receptive field of one pretectal neuron from Ari's work. With the natural motion data set, we're going to be able to assess how reliable these neuronal receptive fields are at representing self-motion for natural flow fields. So I just want to wrap up with a summary of the content that both Tod and I have presented today. We've collected a novel data set of 360 videos from both zebrafish and cichlid habitats, and these include stationary video as well as ground-truth translational and rotational trajectories for the targeted study of optomotor drives and visual motion.
An initial analysis of the low-level visual statistics of the zebrafish habitats suggests that they differ in their luminance, contrast, and power spectra from the terrestrial habitats that are more typically studied for natural visual statistics, and that these differences likely have meaningful implications for the structure and function of the visual systems of aquatic animals. The Arrenberg, Thiele, and Juntti labs are collectively developing novel imaging and behavioral paradigms to examine these issues from a neural and behavioral perspective in the lab, alongside the visual statistics. And in our ongoing work, we're examining how to optimally self-stabilize in aquatic habitats, with an emphasis on distinguishing between translational and rotational self-motion, on handling conflicting motion cues caused by shadows, highlights, and rippling, and also on inter-species differences. So thanks again to everyone on the team, and thanks to the organizers today, our funders, and to everyone for their time. Thank you. Wonderful, thank you so much. This was really, really interesting and a great talk. It's nice to finally see the results from what I saw actually being recorded in India, so it's really great to see those coming together. So we have a few questions that I'm now going to read out loud. One of our very own is asking: have you checked if you get very different results from that pointwise nonlinear histogram matching if you use a non-green channel as the input, for example the red or blue channel? Yeah, that's a great question. We have pulled the samples for the other color channels, but haven't plotted the optimal nonlinearities yet; it's definitely in the plans, and I think it would be interesting if they did come out differently. Yeah, absolutely. Great. Tom actually had a follow-up on that, saying that your subsampling was such that you took the parts of the video that show the horizon: have you looked at the dorsal and ventral visual fields as well? Sorry, can you repeat the question? Yes: your subsampling was such that you took the parts of the video that show the horizon; have you looked at the dorsal and ventral visual fields as well? Yeah, so when we do that subsampling, there are a couple of things happening under the hood, and one of them is that if we obtain any samples that have a large number of saturated pixels, we reject the sample, because that's not going to be a good representation of the image histogram, and this is not an HDR camera. So in the upper parts of the visual field we haven't done much sampling, because, as you probably noticed in a couple of the examples, a lot of the time the camera ends up getting saturated in that region, so right now we're focused on regions where we can eliminate any saturation. I think we will still be able to do some elevation-dependent analysis, but we've been focused for now on taking those large chunks of the visual field where we have a good representation of the different brightness levels. George is asking: great talk, thank you. Maybe I missed it, but which, if not all, aquatic landscapes did you use to estimate the natural statistics? Did you separate between different types of vegetation, for example? Right now that's just samples from, I think, Tod, seven or eight different sites, only in India. Yeah, so we had nine sites, and I believe some of those sites were really turbid.
So I don't think they were quite useful for your analysis, but... Well, I think we wouldn't have eliminated anything on the basis of turbidity, because that would be an important scene statistic that we would want to capture, but I think there is one site that we eliminated because the camera wasn't stabilized, so that might be why we have eight instead of nine. Yes, there were, yeah, stationary videos in very fast-moving streams that were problematic; we would use rods and try to stabilize the whole thing, but it depended on the site. Yeah, I think we were able to get samples from most of the different sites; maybe we just missed one of them. And I haven't separated them out by site to see if the different sites have statistical differences between them. One thing that we are working on now is categorizing them on a continuum of being more or less rippled, because some of the sites had more caustics and changing shadows, and so we're working on an analysis to see if we can categorize those based on features of the spatiotemporal power spectrum. But we haven't tried to separate environments with more or less vegetation or more or less turbidity specifically. Yeah, okay. Great. We have time for a few more questions, so I have another one, asking: great pair of talks; does moving the camera cause wakes or turbulence in the water that change the statistics of motion in the visual scene? I can take that one, and maybe Emily has a comment too. So the camera was on as slender a carbon fiber boom as we could get that would still be rigid enough. We ran into issues when we did a vertical movement: when we pulled the camera out of the water, the camera case would create rippling that was circular and outward from the camera being pulled out. Those were the major issues. Of course, there could be slight effects at the higher speeds, but typically the caustics that were produced were dominated by the wind or different turbulences within the water, so thankfully I think those effects are minimal, except for the case when the camera was actually pulled completely out of the water. But yeah, great question. Great. I have one more: Timothy Lee is asking, from your field work, can you speculate why UV sensitivity would be important for these fish? So, I would say, based on all of the work from Tom's lab pointing to prey capture being sort of the clear behavior that UV cones would be important for: in our field work we weren't focused on larvae. You know, we may have found a larva, and it was swimming near the surface, where UV would be quite a prominent signal, but I don't think we can say anything specific about the role of UV in behaviors. And the camera wasn't particularly sensitive to UV. We thought about bringing a UV camera that we could put underwater, but there were technical limitations in how to house and build such a thing; we could leave that to Tom, potentially. But yeah, unfortunately, I don't think we have a whole lot of information to provide there. Thank you. Great. Thank you for all the questions we've got so far. As we have posted in the YouTube chat, we have the link for this Zoom talk, so everybody is welcome to join us, ask a few more questions, and have a more casual chat with Tod and Emily. So please, everybody, you're welcome to join, and thank you to everybody who has been watching our live stream.
The live stream is going on for a couple more minutes, so that people have time to grab the Zoom link and join if they wish to do so. Thank you. We already have people joining in. Thanks. Thanks. Yeah, thank you. I think I'm just going to keep asking the questions, because I still have Tom asking another one: your flow field analysis suggests that for translation the ground has the biggest motion signals, while for rotation the whole visual field seems to contain usable motion cues. Is that consistent? And can you comment on how you think this links to OMR and OKR dependence on visual elevation? I can take the first part; maybe Tod can take the second part. We do expect that to be consistent. We're still working on extracting the frames and getting a nice labeled set of translations and a labeled set of rotations, so we can do some good averaging and alignment across the different environments and see what trends are common across all of the samples in our data set. But based simply on a geometric analysis of the presumed depth statistics of these environments, I think that observation is spot on. And Ari's here too, so he could also maybe respond to the second part of the question. Yeah, I agree. I think there's also a behavioral difference that we have seen, which is partially published, the other part not yet, but it's on bioRxiv: basically, for the optokinetic response, they look rather to the sides, that's where the response is strongest, whereas for the optomotor response there are two papers now that have shown that it's driven from the back and from below. So that would actually match very nicely to the motion statistics, if that's what we find, as we just said. Why from below makes a lot of sense; why from the back? I mean, you've shown that it does that, but what's the point? Well, one idea is that you want to swim away from things that approach you; it might be more important to respond to something that is coming towards you, and things that go away from you are maybe less of a strong driver for behavior. And also, in Ruben's analysis, where he found this very sharp cutoff between light and dark as being the trigger, and in Ari's work, they found that lower temporal field; it's a slightly different location, but I think it fits quite well with it being, you know, very close to the eyes. I don't know if you would disagree, but it's certainly very close to the eyes. Yeah, that's for sure. Yeah, go ahead. As I said, we've only tested, and also the Portugues lab, motion coming from backwards, and then it's strongest in the lower temporal field, so it would be very interesting. We want to look at how it is for different motion directions as well, to see what the organization there is: whether they only care about things that are in the back, or whether it's also dependent on the motion direction. Thanks. And with larvae swimming, you know, presumably at the surface mostly, although we don't know that per se, but having these cues... I mean, larvae are in potentially this much water, maybe deeper. How much of a signal are they able to get from below? And also, we don't know if larvae are even in moving water, right? This optomotor response could be something that's there and really comes into play, ethologically, later. We just don't know, right? We really don't know what it's used for.
It would be amazing if they didn't get swept by water movements in their early life, right? I mean, they're so tiny. Yeah, yeah, I mean, I think they certainly would. In these rice paddy locations, as you've seen, there's wind, there are things that could push them around, but in terms of currents and things, yeah, I don't know. But I think there should be a larvae expedition. Yeah, that would be cool. Let us know when you find them and we'll come. We don't know the time of year or the location yet. We'd need to have some kind of a light that would just illuminate them, right? Yeah. Who else is here? Is that Joe? Duncan. Duncan! Hi Tod. How are you? I thought you were Joe for a second. No, long hair. Yeah, I just haven't gotten a haircut in a while. Good to see you. Good to see you, Tod. Yeah, Miguel's here too. That's Miguel. Oh, cool. So how far are you from imaging those cichlids? Oh, so we have to learn how to get...