I don't know if you can set this up. We're going to switch gears to the other segment of the eye. Thank you. I'm going to shift gears a little bit and talk about some of the basic science research that I've done here at the Moran Eye Center over the last several years. Retinal geography is a topic I've been working on since the day I interviewed in the lab where I ended up doing my PhD. My background originally was in astrophysics, mathematics, and physics, and I then worked as a programmer in geospatial imaging, basically satellite imaging and things like that. So when I came to the Moran Eye Center and started studying the retina, I was very interested in taking some of those modes of research, applying them to biological systems, and developing technologies to do that. What I'm going to show you first is a little bit about the motivations for why we would want to study this. So first off, I'm going to talk about what I mean when I use the term retinal geography; there are some other terms that get thrown around in the basic science community that mean essentially the same thing. I'm going to talk a little bit about how the evolution of the vertebrate retina has set us up with the strengths and weaknesses of the geography of the retina. And I'm going to talk a little bit about my current work studying retinal geography and how it relates to human disease. Geography, by itself, essentially means making pictures of the Earth. What the term has come to mean more generally is the relative arrangement of features within a structure: the geometry and orientation of different structures within an entity. So when I say retinal geography, what I'm largely talking about is how retinal features and structures differ in the center versus the periphery. At least, that's the primary organization we're interested in in humans. In a number of other species, the interesting geographic axes aren't really central to peripheral, but in humans that's the primary one we look at. The eye has evolved at least twice that we know of, possibly three times. The first eye evolved during the Cambrian, half a billion years ago, and it is the compound eye, where the light-sensitive portion of the eye is on the outside of the globe, and every discrete region of photosensitive material has its own image-forming lens on top of it. This has a couple of advantages. It's really robust: a fly can have a significant chunk of its eye damaged without damaging the image-forming lenses in the other portions of the eye, so it can sustain injuries and still retain a lot of function. The downside is that it's very space-inefficient, because it's constantly reproducing the image-forming lens for every new photosensitive region. It ends up with a lot of lenses and a high burden of lens material. This design doesn't show up in any animals larger than the invertebrate arthropods, but it does persist in those animals to this day. The other type of eye evolved essentially completely independently. We refer to it as the camera eye. Camera eyes are shared by all jawed vertebrates and probably evolved about 100 million years later, still sort of post-Cambrian-explosion. Here, a single lens produces the image for all of the photosensitive regions of the eye at the same time.
So this has an efficiency benefit, in that we only have to build a single lens; we don't have to duplicate the lens over the whole surface. It also allows for higher resolution, because with a single lens we can now cram our photosensitive photoreceptors at extremely high densities into the image being produced by that lens. So it confers some significant advantages. It is, however, basically an inside-out version of the compound eye. The compound eye places the photosensitive regions on the convex, exterior surface of the sphere; the camera eye places them on the concave, interior surface of the sphere, and that carries certain problems that we're going to talk about. Just an interesting side note: these little guys, the hagfish, were a very interesting discovery that explained a lot to us about the evolution of camera eyes. It's basically thought that the process began as an outgrowth of the pineal gland. Light-sensitive tissue from the pineal gland moved toward the surface of the animal, where the surface ectoderm became translucent to allow light to pass through. It's thought that this is useful for the pineal gland because the pineal gland is involved in regulating circadian rhythms, so there's a significant advantage to knowing where the light is and when the light is there. But as that surface became a little clearer, specializations of the surface ectoderm led to the formation of a lens, which takes us from something that, in the hagfish, is thought to confer only a sensation of light levels and possibly light direction, with no image-forming vision, all the way up to a human that can form an image based on that same process. This has been a very interesting saga, hotly debated for the last 100 years, and the eye has been one of the central topics used as an argument against the theory of evolution. But at any rate, when a human develops embryologically, we basically first become a hagfish, then we become a lamprey, and then we become a human. And of this entire half a billion years of evolution, I'd like to point out just for perspective that human beings have existed for about 200,000 years. So when you're looking through the slit lamp, you're looking more at a dinosaur than at a human in a lot of ways. Some of the consequences of the way our camera eye evolved basically boil down to that convex-surface-versus-concave-surface problem. Our retina is essentially upside down. Our photoreceptors are at the far posterior edge of the retina, and the light that forms the image on them has to pass through the entire retina and all of its hardware and associated structures before hitting the photoreceptors. That's obviously a problematic design, because you want the image to be in focus. You don't want the photons, as they pass through the retina, to be diffracted and scattered, because that's going to defocus the image. But the light still has to pass through blood vessels, and through the entire processing neural retina, before it gets to the photoreceptors. So a lot of very specific biological adaptations have had to take place to allow that to happen without defocusing the image. Overall, a couple of things are important for what we're going to talk about: in a camera eye, we have two blood supplies for the retina.
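As the next part describes, the photoreceptors themselves sit in an avascular zone between those two supplies and are fed by diffusion from both sides. As a toy illustration of what that dual supply means, here is a minimal one-dimensional steady-state diffusion sketch in Python; every number in it is assumed and chosen only to give a sensible shape, not a physiological measurement.

```python
import numpy as np

# One-dimensional steady state of D * c''(x) = Q: uniform oxygen
# consumption Q across an avascular layer of thickness L, with the
# concentration pinned at both ends by the two vascular beds.
# All values are illustrative; concentration units are arbitrary.
L = 200e-6          # layer thickness in meters (assumed)
D = 2e-9            # diffusivity in m^2/s (assumed)
Q = 8.0             # consumption, concentration units per second (assumed)
c_choroid = 60.0    # boundary value on the choriocapillaris side
c_retinal = 30.0    # boundary value on the retinal-vasculature side

x = np.linspace(0.0, L, 101)
c = (c_choroid
     + (c_retinal - c_choroid) * x / L
     + (Q / (2.0 * D)) * x * (x - L))   # analytic solution of D c'' = Q

i = c.argmin()
print(f"lowest oxygen {c[i]:.1f} at {x[i]*1e6:.0f} um from the choroid")
```

With either boundary removed, the minimum in the middle of the layer drops much further, which is one way to see why the photoreceptors depend on both supplies at once.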
The first blood supply, the one we're most familiar with in ophthalmology because we see it all the time, is the retinal vasculature, where the central retinal artery comes up with the optic nerve and then fans out on the interior surface of the retina, penetrating just a little way into the retina, not all the way through. The second vascular bed is the choroidal vasculature, which turns into the choriocapillaris; that underlies the retinal pigment epithelium at the back side. They don't come all the way to each other. The center of the sensory retina, so the photoreceptors predominantly, doesn't have blood vessels near it; it basically receives bidirectional diffusion of oxygen and nutrients from both sides, from the choriocapillaris and from the retinal vasculature. This becomes pretty important when we look at the human retina. In the human retina, we have predominantly cones in our fovea. The bottom axis here is what retinal geographers refer to as retinal eccentricity: when we say zero degrees eccentricity, we're talking about the fovea, and when we say sixty-some-odd degrees eccentricity, we're talking about the ora serrata. I don't know why this graph goes to negative 80, because your retina certainly doesn't. We have predominantly cones in the fovea, and rods are actually physically excluded from it. The rods, however, very quickly reach their peak density just outside the fovea, and then the overall photoreceptor density decreases as you move out toward the periphery. So you very quickly get into a rod-dominated region. It's important, I think, to point out at this juncture that we basically don't use our rods anymore. Most of the evolutionary history of our retina, especially of the neural retina and the processing pathways through the ganglion cell, amacrine cell, and bipolar cell layers, is set up to process rod-based vision, and we essentially don't use any of it anymore. Even when you're lost in the woods at night, you're still using predominantly cone-based vision. Human beings are terrible at what we call scotopic vision, which is rod-based vision. We're almost exclusively photopic animals, and so rods in some ways are not particularly useful for most of our vision. One of the most interesting and also most problematic adaptations, in order to cram all of these photoreceptors into this central region... Yeah. No, a lot of it has to do with the recycling, the shared recycling of pigments between the rods and the cones through the RPE. And in fact, in retinitis pigmentosa, for example, even though the disease predominantly affects rods, it's actually the side effect on cones that causes the more significant visual impairment. So things that affect our rods affect our cones, but we very rarely live in a rod-based visual world anymore, especially in modern society. We probably had periods in our evolutionary history when we used a combination of rod- and cone-based vision, but it's actually really interesting: the vast majority of the circuitry in your neural retina is there for rod-based vision, not for cone-based vision, and we don't use most of it anymore. In fact, in humans compared to other animals, it really doesn't work very well anymore either. In order to make room for all of these cones, we've had to make certain adaptations, and the primary one is that retinal vasculature has been excluded from the fovea.
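Because these eccentricity profiles recur throughout the rest of the talk, here is a rough Python sketch of what a cone and rod density model might look like. The shapes follow the description above (cone peak at the fovea, a rod ring just outside it, decline toward the ora serrata), but the functional forms and numbers are invented for illustration and are not the talk's actual data.

```python
import numpy as np

def cone_density(ecc_deg):
    """Toy cone profile (cells/mm^2): a sharp foveal peak that falls
    off quickly with eccentricity. Shape and numbers are invented."""
    return 150_000.0 * np.exp(-np.abs(ecc_deg) / 2.0) + 4_000.0

def rod_density(ecc_deg):
    """Toy rod profile: rods excluded from the fovea, peaking in a
    ring some degrees out, then declining toward the periphery."""
    e = np.abs(ecc_deg)
    return 160_000.0 * (1.0 - np.exp(-(e / 8.0) ** 2)) * np.exp(-e / 40.0)

ecc = np.linspace(0.0, 70.0, 71)   # 0 = fovea, ~70 deg = ora serrata
total = cone_density(ecc) + rod_density(ecc)
print(f"fovea: {total[0]:,.0f}  20 deg: {total[20]:,.0f}  60 deg: {total[60]:,.0f}")
```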
Coming back to the vasculature: the only vasculature left in the fovea is that choroidal vasculature on the posterior surface, and this becomes really important. So, a few retinal diseases that have known geographies. Retinitis pigmentosa: RP is a disease, really a lot of diseases, typically characterized by genetic mutations in parts of the phototransduction cascade in rod photoreceptors. But most of your visual loss happens through what we call bystander destruction of cones. The rods die, the immune system comes in to clean up the rod debris, the cones get destroyed in the process, and you lose both your scotopic and your photopic vision. That's why in RP we lose our vision from the periphery inward: we lose it where the rods are located. In glaucoma, the reason we lose entire chunks, entire contiguous wedges, of the visual field is the structure of the lamina cribrosa, where the axons of the ganglion cells pass through a physical constriction point. So these are good examples of diseases where we have a good understanding of why the geography looks the way it does. Macular degeneration and its associated subtypes, we don't have a good model of. This was the problem that I was presented with when I started in my lab, the one I found the most interesting, and it's what I really wanted to study. I was very curious about what it was in the anatomy and physiology of the macula that caused it to degenerate relative to the other regions. I was more interested in studying the geographic aspects of it than the molecular aspects, and that's what I dedicated my PhD time to. I developed what we ended up naming the RET space analysis system, which is going to be distributed here by the Moran Eye Center. The basic idea is that it allows you to measure and profile, to generate geographic profiles of, essentially any molecular, anatomic, or pathologic feature of a post-mortem human retina that you want, and then build mathematical models and descriptions of how those features behave; I'm going to show you some examples. I take a strip of retina, these donor eyes come from the Lions Eye Bank, from the optic nerve head through the fovea and out to the ora serrata. This ends up being a strip about two centimeters long. It's processed in an electron-microscopy fashion, broken into little chunks, assembled into little blocks, sectioned into ultra-thin 200-nanometer sections, and imaged with about 15 different immunohistochemical markers, all on sections so thin that they pass through the same cell about 50 times. You can then take those images and co-register them on top of each other, and effectively generate a 15-channel two-dimensional image, as opposed to confocal microscopy, where you're lucky to get three channels. We can generate 15-channel molecular imagery that spans the entire eccentricity range of the nasal visual field, the temporal retina, which is your predominantly binocular vision. Each retina ends up generating somewhere in the neighborhood of 100 to 300 gigabytes of imagery: an entire hard drive filled with the imagery from a single strip of one retina. What I'm going to do now is basically skip four years ahead and just show you the pretty pictures that the system spits out at the other end.
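The co-registration step is the computational heart of that pipeline. Here is a minimal sketch of what stacking the roughly 15 marker channels might look like, assuming translation-only alignment via scikit-image's phase_cross_correlation; the real system would presumably also need rotation, scaling, and section-to-section distortion correction.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def coregister_channels(channels):
    """Align each single-marker image to the first channel by pure
    translation and stack them into one multi-channel image. A
    deliberate simplification of aligning serial 200 nm sections."""
    ref = channels[0]
    aligned = [ref]
    for ch in channels[1:]:
        offset, _, _ = phase_cross_correlation(ref, ch)  # estimated (dy, dx)
        aligned.append(nd_shift(ch, offset, order=1, mode="nearest"))
    return np.stack(aligned, axis=-1)

# Shape demo with random stand-in images for ~15 markers:
stack = coregister_channels([np.random.rand(512, 512) for _ in range(15)])
print(stack.shape)   # (512, 512, 15): a 15-channel two-dimensional image
```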
So these are some profiles from a series of different donor retinas. In each of these, the blue line is always a 74-year-old with no known macular degeneration, ranging up through the red, which is a late-70s female who had severe wet macular degeneration. What I'm going to show you is a series of different measurements. Each of these graphs runs from the optic nerve head, here at about negative 15 degrees eccentricity, up to the ora serrata, in the neighborhood of 60 or 70 degrees eccentricity, passing, hopefully, through the macula. As you can see here, I really hit the fovea with the green slice. In the blue one, I missed it altogether; I know I missed the fovea because I don't get the cone spike. And in the red and orange ones, I came pretty close. The true fovea centralis is less than a millimeter across; it's kind of hard to hit. That's what I tell people when I'm pointing out that I missed it. You can see here in the photoreceptor density that I didn't miss by much in the blue one; I kind of grazed that foveal pit. This is total photoreceptor density. But in the green and orange ones, you can see what we more classically expect to see, where we get a significant dip in photoreceptor densities here. And interestingly, the photoreceptors are completely gone in my severe AMD patient; this is totally obliterated retina. That, however, very quickly returns to pretty much normal photoreceptor densities as soon as you get to about 20 degrees off axis. Nobody had ever really characterized what Bruch's membrane thickness does as a function of eccentricity, so that was one of the first things I did, and we found something we weren't expecting. What people had done previously was look here and here, taking just two measurements from each eye; nobody had ever really looked at it this way before. And so it was always thought that Bruch's membrane was thicker in the macula than in the surrounding structures, simply because here is higher than here. But it turns out that the thickness of Bruch's membrane actually represents a depression relative to the surrounding retina at plus and minus about 15 degrees. And if you'll notice, the patients who have wet macular degeneration have the thinnest Bruch's membranes, which makes sense, since the blood vessels in wet macular degeneration have to penetrate this layer. This next one shows the RPE thickness, which indicates where neovascularizations have taken place. We've also, interestingly, found a lot of pathology going on in the far periphery of the retina, where nobody had ever looked before, that actually mirrors a lot of what we see going on in the macula. You can see we have pretty normal RPE thicknesses in all of our patients here, until we get to the late wet macular degeneration, where the RPE is completely destroyed and we're missing a whole swath of it. And the choriocapillaris density in that same patient is also dramatically decreased. There's been a lot of argument in the RPE-choriocapillaris world about macular degeneration: which is the chicken and which is the egg? The RPE produces the VEGF that supports the choriocapillaris; the choriocapillaris brings in the oxygen and the sugar that feed the RPE. And so it's kind of difficult to say who dies first in this process. Though I think our work, and the work of some other labs in California, is now starting to support the idea that the choriocapillaris dies second. It's not a vascular disease of the choriocapillaris that kills the RPE; it's the atrophy of the RPE that takes away the vascular endothelial growth factor and in turn leads to the atrophy of the choriocapillaris. I think we can finally put that question to rest with this technology.
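The Bruch's membrane and RPE curves above are thickness-versus-eccentricity profiles. A hypothetical sketch of how one such profile could be pulled out of a segmented strip image follows; the column-wise measurement and the rough 3.5-degrees-per-millimeter conversion are my assumptions, since the actual system anchors its axis on measured landmarks like the optic nerve head and fovea.

```python
import numpy as np

def thickness_profile(layer_mask, um_per_px, deg_per_mm=3.5):
    """From a binary mask of one segmented layer (say, Bruch's
    membrane) in a retina-strip image, measure thickness column by
    column and map position along the strip to degrees of
    eccentricity. The ~3.5 deg/mm figure is a rough human average."""
    thickness_um = layer_mask.sum(axis=0) * um_per_px  # pixels -> microns
    pos_mm = np.arange(layer_mask.shape[1]) * um_per_px / 1000.0
    return pos_mm * deg_per_mm, thickness_um  # (ecc in deg, thickness in um)
```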
This is a picture; I can explain to you later how I generated it. What you're looking at here is basically a massive confluent drusen that has separated. The drusen itself has been removed by my processing, basically the deplasticization process, but you can see the shadow of where the drusen was. Here are your RPE and photoreceptors, and you can see by the thickness of the neural retina that we're in the fovea. And in this section we found something really curious. This red line shows the displacement of the RPE above Bruch's membrane, and the blue dots show the concentrations of glutathione in the overlying cones. Glutathione is the most important buffer of your redox potential inside any kind of neuron, and really inside most cells; some cells use citrate, but glutathione is overwhelmingly your most powerful intracellular antioxidant. And it's dramatically elevated in the cones on top of this lesion. That's a little curious, because cones can't make glutathione: they lack glutathione synthetase. They lack a lot of things, actually; the RPE is pretty much there to babysit photoreceptors and do all of the various metabolic functions that they lack. But it's still very curious where this glutathione came from, because these cells can't make it. So I started profiling the glutathione concentrations of the neighboring cells, and lo and behold, it turns out they're strikingly lacking. So we thought maybe the RPE cells, in the presence of this injury, found some mechanism to transport their glutathione up to the photoreceptors. That would at least provide a mechanism to explain this, but nobody had ever reported it. A quick PubMed search found that, in point of fact, retinal pigment epithelial cells, not photoreceptors, have been shown and characterized several times to have bidirectional glutathione transport capacities. It had never been described as an adaptive or protective mechanism in the retina before, but because of this observation, it's now something we're pursuing in the laboratory as a potential mechanism through which disease progression could be slowed. Future work: of course, I always want more data. My current process involves only me working on everything, and it takes approximately one to two months to process a single human retina. What I'm working on at the moment, and will work on a little bit this fall, is better computer-assisted annotation of my images to increase my throughput. Of course, hiring a bunch of undergrads would be very helpful as well, but that requires a lot of money that I don't have; I call that undergrad-assisted annotation. Potentially, theoretically, just based on the technological limitations, I could get this down to a week per retina. I'm also building a website to distribute the code, which is all open source, collaborating with some of the other schools, and I also want to be able to distribute the data in a sort of Google Maps kind of way, where individuals would be able to zoom in and out of these data sets and visualize different features of the retina.
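On the Google Maps idea: the standard approach is a tile pyramid, where the image is repeatedly downsampled and each level is cut into fixed-size tiles that a web viewer fetches on demand. A minimal Pillow sketch under those assumptions; the z/x/y naming and paths are made up, and real viewers such as OpenSeadragon define their own tile formats.

```python
import os
from PIL import Image

def build_tile_pyramid(img_path, out_dir, tile=256, levels=5):
    """Cut an image into fixed-size tiles at several zoom levels,
    saved as out_dir/z/x/y.png, so a viewer can fetch only the
    region and resolution currently being looked at."""
    im = Image.open(img_path)
    for z in range(levels - 1, -1, -1):    # z = levels-1 is full resolution
        w, h = im.size
        for x in range(0, w, tile):
            for y in range(0, h, tile):
                t = im.crop((x, y, min(x + tile, w), min(y + tile, h)))
                path = os.path.join(out_dir, str(z), str(x // tile))
                os.makedirs(path, exist_ok=True)
                t.save(os.path.join(path, f"{y // tile}.png"))
        im = im.resize((max(1, w // 2), max(1, h // 2)))  # halve for next level
```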
I'd just like to thank everybody at the Moran, the neuroscience program, and everyone in my lab. I'll take any questions. So basically, that's where the time-consuming part is. We go through with the glutathione antibody, so we have a stain of glutathione across the entire retina, and then that section is imaged at basically the full resolution of light microscopy. And so we go through and identify, either computationally or manually, every single cell in the entire retina, and then quantitatively analyze the amount of glutathione signal inside each individual cell. So each of those dots I was showing you is actually a corrected value for the approximate molecular concentration of glutathione inside an individual cell; each one of these is an individual cone outer segment. And that's why the database generated from all this information is gigantic, which is really what I wanted in the first place. All I ever really wanted was the database, so I could sit there, write mathematical tools to dig through it, mine it, and find cool stuff. It took me four years to generate the database, and then I got to have fun mining it for only six months before they made me go back to medical school. Thank you.
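As a closing illustration of the per-cell quantification described in that answer, here is a minimal sketch, assuming a pre-existing segmentation mask; scikit-image's label and regionprops stand in for whatever the actual pipeline uses, and counts_per_mM is a made-up calibration constant in place of the real intensity correction.

```python
from skimage.measure import label, regionprops

def per_cell_glutathione(cell_mask, gsh_image, counts_per_mM=1000.0):
    """Label every segmented cell (here, cone outer segments), average
    the anti-glutathione signal inside each one, and apply a
    calibration factor to get an approximate concentration."""
    cells = label(cell_mask)             # one integer id per connected cell
    return [(r.label, r.centroid, r.mean_intensity / counts_per_mM)
            for r in regionprops(cells, intensity_image=gsh_image)]
```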