All right, there was one last slide of background before I get to the models, which I'd forgotten: the question of plasticity and the role of the environment. This is something I referred to earlier. If you inject a radioactive tracer into one eye of a control animal and leave the other eye intact, this is the pattern you get: some areas are densely labeled, and they alternate. That's the underlying ocular dominance map. This is a very large area of macaque V1; that bar is only one millimeter, and usually we're looking at an area like this, or maybe like that. This is the control animal, and this is what happens when you monocularly enucleate: you remove one eyeball and leave the other. In that case the light areas must indicate uptake of tracer, because the remaining eye is where the tracer must have been injected. The area of cortex devoted to the remaining eye has greatly increased in this animal. You can do this in lots of animals and systematically show that the enucleated ones end up like this and the controls end up like that. This shows both the power and the limits of reorganization: you get reorganization on a scale such that individual stripes increase in width, but it hasn't taken over the entire cortex. There are plenty of areas of cortex that still don't respond, at this stage anyway; maybe they would eventually become responsive. Does that make sense? Basically. You can do the same sort of thing other ways. That's a very old technique using physical manipulations; you can also raise animals in different environments and so on. That's harder to get right, but more functionally relevant. If you do that, you can look at responses to different orientations, and again you see the same pattern: the basic map structure doesn't change, but the orientation that's overrepresented in your world becomes overrepresented in your maps.
If you do this in young animals, you see clear results like this. If you do it in older animals, the results of changing the environment in this way become less and less visible. That's the background. Now, models. What do you need to do to build a map-scale model? You start with a box-and-arrow diagram, like any model. What are your boxes and arrows in this case? They're not neurons, although there are neurons in there; you usually start with multiple interconnected maps. If you're just studying V1 by itself, it's hard to do much with it, because you need inputs and outputs. Luckily, the visual system has a relatively straightforward pathway from input to output. If you were to do the same type of architecture for the auditory system, there would be 20 different stages between the ears and A1; this is why modeling the auditory system is a mess. For V1, there are just photoreceptors, then some processing that we don't care about because we're only trying to capture a phenomenological model of the retinal ganglion cells, then some processing in the LGN. From the photoreceptors to the ganglion cells is one synapse; from there they go to the LGN, another synapse; and from the LGN you get to V1. I've depicted V1 as one box here. By one box, I mean one topographically arranged set of cells with homogeneous properties in some sense: they don't all prefer the same orientation, but they all prefer some orientation, let's say. Real V1 will have many such sets of cells: a bunch of excitatory cells of a particular type, a bunch of inhibitory cells of a particular type. In an area, you'll have multiple cell classes at a particular lamina. So at one lamina (one lamina, multiple laminae; I'm no Latinist) you'll find a certain set of cell classes, but if you look at a different lamina, you'll find other cell classes.
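The box-and-arrow structure just described, photoreceptors feeding a fixed phenomenological retina/LGN stage feeding a single V1 sheet, can be sketched as a plain feedforward pipeline. This is a minimal illustration, not any particular package's API; all function names here are mine, and the stage bodies are deliberately trivial stand-ins.

```python
import numpy as np

# Box-and-arrow sketch: each "box" is a topographically arranged sheet,
# each "arrow" a fixed transformation, with V1 as the one stage whose
# weights could later be learned. All names are illustrative.

def photoreceptors(image):
    """Sample the image onto a receptor sheet (here: identity)."""
    return np.asarray(image, dtype=float)

def retina_lgn(activity):
    """Phenomenological RGC/LGN stage: mean subtraction here, standing
    in for center-surround processing (an assumed simplification)."""
    return activity - activity.mean()

def v1_sheet(activity, weights):
    """One topographic V1 sheet: rectified response through an afferent
    weight array (a trivial one-to-one mapping in this sketch)."""
    return np.maximum(weights * activity, 0.0)

image = np.random.default_rng(0).uniform(0.0, 1.0, (8, 8))
weights = np.ones((8, 8))
response = v1_sheet(retina_lgn(photoreceptors(image)), weights)
```

The point of the sketch is only the shape of the diagram: everything before V1 is fixed, and only the last stage has weights at all.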
In particular, the plots I showed of those neurons with very long-range connections: those tend to be in layer 2/3. In many areas those are separate layers, 2 and 3, but in the visual cortex they're almost indistinguishable; it's just one layer, called layer 2/3. Neurons in that layer are the ones with these very long-range patchy connections. If you did the same injection but targeted neurons in layer 4, the layer that gets direct thalamic input, you'd see much shorter connectivity, and it would be hard to see any patches because it would all be local. So the layer matters, and the cell type matters. In the simplest version of such a model, we ignore all of that and collapse it into just V1. And I will almost only talk about the simplest version of the model, because there's no way I could get further; we have an hour right now, and there's no way we can do much. In the tutorial today, again, you'll look at the simplest version: one set of cells in V1, not even distinguishing between excitatory and inhibitory cells, not distinguishing between layers, not distinguishing between anything else, collapsing it all. And why would that be at all relevant to do? Because we only get one set of data from our optical imaging experiment: the surface of the cortex and its organization. That's all we have, so you don't have a whole lot to go on. To do better you need to bring in lots of different sources of evidence about the differences between laminae and cell types and such, and you can do that, and we do that in other studies. But for a first pass, if you just want to ask, I've got a map model, is it working at all correctly, you can do that with only a single map, because your fundamental data is only a single sheet of orientation preference, say. And how many neurons are in this map? Well, we really start out not talking about neurons. There will be neurons.
We will get from neurons to maps, from networks to maps, but you start out with a map. You say, okay, I need to simulate a certain number of square millimeters, because that's what will correspond to something real in the world. If you look at only the tiniest little bit and ask about its neural representation, sure, a very small patch of V1 might correspond to it, but it won't be behaviorally relevant. If you want something you could train the animal to respond to reliably, you're going to want several square millimeters of area. So you start with that area, and then you basically throw as many neurons into that box as you have time for and as much memory as you've got on your system. And you hope that because everything is smoothly organized, it will still work. The only justification for doing this is that it does work; otherwise it would be a completely ridiculous thing to do. But it works, so it's good. So why would you do this? Well, I want to understand where these maps come from. We talked about this in general; earlier I had a slide about research questions having to do with maps, and those were for experimental people, computational people, whoever. What are the questions you can address specifically using computation? That's a much smaller subset: what can you actually run on your computer and decide? Well, you can look at computation itself; computers are really good for that. And you can look at development. That's really hard to do in the lab: it's very hard to collect data from the same animal over time, and at the map level it's almost never done. As I said, it's definitely been done once, and there are occasional other studies like that, but those are just single data points, little bits. They don't show the process; they don't show what's really going on.
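The sizing step described above, pick an area in square millimeters and then fit as many units into it as your memory budget allows, is just arithmetic. A sketch with made-up numbers (the area and density here are illustrative, not measured values):

```python
import numpy as np

# Sizing a map model: choose a cortical area in mm, choose a grid
# density the hardware can afford, allocate one rate unit per grid
# point. The numbers below are purely illustrative.

AREA_MM = 10.0            # simulate a 10 mm x 10 mm patch of V1
UNITS_PER_MM = 10         # grid density the memory budget allows

side = int(AREA_MM * UNITS_PER_MM)   # 100 units per side
sheet = np.zeros((side, side))       # one firing rate per grid unit
n_units = sheet.size                 # 10,000 model units total

# Each unit stands in for all the real neurons under roughly a
# 0.1 mm x 0.1 mm patch of cortex; the smooth organization of the
# maps is the only reason this substitution can work at all.
```

Doubling the density quadruples the unit count (and the connection count grows faster still), which is why the density, not the area, is usually what the hardware limits.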
It's very hard to get that data, but it's very easy to run things on a computer, where you can at least check: do you have a reasonable starting point and reasonable final results? That's a start; it's more than we know for a lot of things. So you can do many things on a computer that are much more difficult in the lab. And if you can do things that link development and adult function, then you can start to do what I want to do, which is explain why the cortex is the way it is. Why should it be like this? If you want to address why, I think you have to get at development: how did it come to be like this, and why didn't it become something else? That's why I think development is extremely important. Once you're done, you can use these models to run simulated psychophysical experiments. A psychophysical experiment is something you can do with a human: you present something and have the human press a button. Fundamentally, that's a psychophysical experiment. You present some pattern and ask whether they can distinguish it; you present a line like this and a line like that and ask which one is tilted more to the right, and eventually they can't tell. You can run lots of psychophysical experiments that way, and you can uncover various illusions and after-effects. You can run those same experiments on your models, subject to certain assumptions about how the data is read out. And then you can connect development to adult function and to adult dysfunction, which is often more illuminating. Then, whenever you have an experiment done at the map level (and in my opinion many experiments should be done at the map level, but only certain ones are practical), you can try to replicate it on a computer, you can try to predict the responses to things that haven't been done yet, and you can try to understand what might explain results that haven't been explained.
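A simulated psychophysical trial of the kind just described, deciding which of two lines is tilted further, reduces to decoding an orientation from model responses and comparing. Here is a sketch using an idealized bank of orientation-tuned units and a population-vector readout; both the tuning model and the readout are my assumptions, standing in for whatever responses an actual map model would produce.

```python
import numpy as np

# Simulated tilt-discrimination trial: idealized orientation-tuned
# units respond to a stimulus, a population-vector readout decodes
# the perceived orientation, and the "subject" answers by comparing
# two decoded values. Tuning shape and readout are assumptions.

prefs = np.linspace(0.0, np.pi, 32, endpoint=False)  # preferred orientations

def responses(theta, kappa=5.0):
    """Von-Mises-like orientation tuning on the doubled angle."""
    return np.exp(kappa * (np.cos(2.0 * (prefs - theta)) - 1.0))

def decode(r):
    """Population-vector orientation estimate (doubled-angle trick,
    since orientation is periodic in pi rather than 2*pi)."""
    z = np.sum(r * np.exp(2j * prefs))
    return (np.angle(z) / 2.0) % np.pi

def trial(theta_a, theta_b):
    """True if stimulus B is decoded as tilted further than A."""
    return decode(responses(theta_b)) > decode(responses(theta_a))
```

In a real use of the model you would replace `responses` with the trained map's activity, and the interesting effects (tilt after-effects, illusions) show up as systematic biases in `decode`.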
What you can't do is prove anything about the actual physical system; hopefully nobody is going to try to do that with a computer, because you can't. You can only prove things about what you've put into your model. You can't address a lot of questions that way, but you can show that it is possible to do it the way you're doing it on the computer. That's what you can show: an existence proof. You can make predictions and test them, but you can't demonstrate anything about the underlying system; to do that you have to mess with the underlying system. Okay, so as I mentioned, if you wanted to do a map model: from what I saw of what you're looking at, a lot of you aren't thinking at the map level, but for some of the questions you're addressing, you're looking at a very small patch of things while thinking in terms of how that relates to the overall organism. Now, if you wanted to model that just in terms of a few connected neurons, where somebody measures data about those neurons and you try to replicate those in your model, that's fine; that's modeling at one level. But what if you wanted to embed that in a larger map model? That's what I'm getting at in this talk. You may or may not ever want to do that, but if you do, you would typically start with a cortical area, although lots of subcortical areas have maps too; many of them, maybe not most, but certainly many, have a systematic organization of some sort. But if you picked some area, you wouldn't start with: I've got a channel, now I build it up and get a patch of membrane, I build that up and get a whole neuron, then maybe add another neuron and another neuron. Is that going to work? Are you going to get to a map? Not in your lifetime. You're not going to have the data to constrain all of those steps. You've got to do the opposite.
You've got to start at the map level and then maybe fill in one little bit that you understand really well, and try to link that into the overall map level: have a coarse overall model and a really fine one, and try to make the link between the fine one and the coarse one. That's perfectly feasible, and it's in the spirit of this summer school. So basically you add overall detail, and maybe sometimes you add very focused little bits of extreme detail. You add whatever details you need to study some particular phenomenon, and that depends on what type of data you have. Then how much detail you can add is limited by how big your computers are, how long you can wait for them, and how smart you are. That last one is almost always the fundamental limit: how complicated a system can you ever figure out, ever make work, ever know what it's doing? That, fundamentally, is the main limitation, and it's tied to how much data you have, because if you have a ton of data that constrains everything, you don't necessarily have to understand as much; you can say, I don't know the answer, I'll just put the data right in. But that's often completely impossible, because the data is almost never that clear, clean, obvious, and straightforward. You still have to understand it; you still have to really know what's going on, and that brings you back to the same limit. So you're often very constrained on the amount of detail, no matter how fast or big your computer is. So you put that detail in, then you validate your model on whatever data you have. In my opinion, and everything I do tries to do this, you should bring in as many different sources of data as possible: completely different data of different types from different labs. Psychophysical data on humans? Bring it in. Genetic manipulations on mice? Bring it in. I try to bring everything in, under the assumption that the basic architecture of the cortex is very similar across all of these and that fundamentally it's doing similar computations, and if that's true I can
make it all work. Obviously that's a whole lot of assumptions, but that's my own approach; other people will be much more narrow about the sort of data they care about. Anyway, you validate the model on whatever you've got, then try to make predictions, and repeat as necessary. Okay, let's go do that. No, no: you need more than that. You need some idea of what it means to model a map. So let's say you've picked your area, you've got your experimental data ready, and you know how much detail you can handle. What do you do? The first thing you do is set up a grid: a mesh, an array, however you want to think about it. The real sheet has a nearly infinite number of neurons, and you sample it according to some regular arrangement, usually either a Cartesian grid or a hex grid. If you have a dense enough grid, it doesn't really matter in most cases, but the typical choice is a Cartesian grid because that works nicely on computers. And then what's your grid element? If you have what's called a mean-field model, your grid element is not a neuron; it's a representation of the average properties of that region. Now, that hurts people's heads unless you're a mathematician; mathematicians love those models. Other people just say: okay, I know this is wrong, but each thing I have on the grid is a neuron. Sure, the real system has 10 billion neurons and I've got 112, but I'm going to make it work; each grid element is a neuron, and I'll fudge it somehow. That's the much more typical way to do it. So what do you make at each grid point, if you're not doing a mean-field model? You make a neuron. What kind of neuron? The typical choice is a point neuron, which is ridiculous compared to some of the stuff you've done earlier in the course: the entire neuron is just a single point in space. So this whole patch that has maybe 10,000 neurons in it, you've replaced by one neuron. Clearly an insane thing to do, unless it
works. Moreover, that point neuron usually doesn't even have spikes in these types of models; it has only a firing rate. It's got one number associated with it, its instantaneous state: it's firing at 0%, at 100%, or somewhere in the middle. That's all it does, and that's the standard thing in this type of model. Now, as I said, you can always take one little bit and replace it with something arbitrarily more complicated, fine. But if you want to work at the map level, it's very hard to do anything but something like this. The second most common choice is integrate-and-fire neurons. Those are also typically point neurons, but they do spike; I presume these have come up already. There are plenty of map models that use integrate-and-fire neurons. There are a few map models whose units are not point neurons but have multiple compartments each, and those tend to be almost as ridiculous as point neurons, so I don't see the point: they'll have a soma, an axon, and maybe three dendrites or some small processes somewhere. And there are many reasons for that. One is this: let's say you've got some beautiful, smart model with thousands of compartments. Okay, now you need a million more of those. Do you have them? No. You might have one from one animal; you might have ten from one animal; but you'll never have enough to build a map from one animal. So you take some others and cram them together. This is the Blue Brain project: measure a whole bunch of animals, take the similar spot in each one, measure a lot of neurons, and cram them all together. But why is a neuron shaped the way it is? Because there's a neuron here, and a neuron here, and here, and here: its shape reflects what it's connecting to. So when you cram them together, they have to morph them; they've got complicated rules for morphing and bending them into the
right shape, and it's largely made up. If you're going to make stuff up, don't even simulate it; just say: I made it up, and it didn't matter. Basically, these models are driven by the fact that you need to simulate a ton of neurons and you just don't have the data. So you do a stupid model, because you don't have the data to drive a good one, and as I said, the only justification is that it works. If it didn't work, everyone would just laugh at you. Well, people will still laugh at you, but they'll be laughing while your results actually work and actually explain things. And it's only going to work because of these smooth orientation maps, where you can represent a thousand neurons by one and it still works. That's purely a consequence of having smooth, organized maps; otherwise there would be no point. So anyway, it does work to do this, or depending on your question, it can work. But nobody has ever successfully built a model the other way. People are trying; people are spending amazing amounts of money; there are at least five different European projects, each involving six different partners, all putting their money into that kind of work. Meanwhile, these simple models work today. Synapses: oh, you could devote a whole supercomputer to one synapse if you wanted to, but here a synapse is one number, the strength, the weight. This is a connectionist-type approach, popular in the 1980s and even in the 1950s: two neurons connect, there's a number that's the strength, and that's it. You can even do this without a computer if you have a fancy mathematical brain: you can write down equations and represent everything without simulating a discrete grid at all. But that requires the occasional genius; I don't even know how their brains work. If you want to read more, there are much later reviews than this one; this is taken from an old slide. These are good reviews of map models, but
there are more recent ones as well. Anyway, what we'll focus on is of course the typical and simple case. What are examples of such models? I'm really only focusing on developmental models because, well, other types of models don't matter, in my opinion. That's probably a little too glib; it's just a question of what you want to explain. The developmental models try to explain why a patch of cortex becomes a visual cortex, becomes an orientation map, and I think those are the interesting questions at the map level, because you can't really address them except at the map level. Of those, are any of you familiar with self-organizing maps? It's interesting: these models used to be extremely popular for data analysis and visualization. They were originally inspired by the brain, but they went off on their own, and they've become less popular over the years, so it's interesting to see that almost nobody is currently familiar with them. The type of model I'll focus on is based on a very simple idea, just Hebbian learning: when you have a connection between two neurons, what sets its strength? Over time, if this one is active and that one is active at the same time, it becomes a strong connection, and if not, a weak one. So I'll focus specifically on models like that. There are also models that start from information theory, or from very high-level principles, to say: this is what a neuron should be doing; if you're in the visual system, it should be representing the visual world in this particular way, and we can derive that mathematically; and look, if we do that, we get things that look like the brain. There are tons of models of that type. And the thing about high-level approaches is that as you go away from the low level
there is less and less agreement between people, because there are many possible high-level interpretations of the same underlying phenomena. Everyone agrees that Hodgkin-Huxley explains the very small things very well; but what it all means, how it all comes together, that's all just different churches, different religions, and no one agrees on that stuff. And it's the same here: people don't agree on much at the map level. They don't agree on whether it's fundamentally activity-driven; I will present everything as activity-driven, but there are people who argue that it's all fundamentally hard-wired. They don't agree on what drives it, whether it's the external world or not. They don't agree on whether there's an objective to it or whether it just happens, whether it's tied to vision or not. Basically, there's no agreement on any of the fundamental questions. And some of these approaches, particularly the ones motivated from the machine learning community, are very hard to relate to what actually happens in the brain, because they rely on things like negative activations. If you square a negative activation, what do you mean? What are you even talking about? Some people try to come up with a way to make that make sense, but they're starting from information theory, from machine learning, from things that really make sense mathematically, and trying to map those onto the brain. The models I'll show start with the brain and try to make that make sense mathematically. Does that make sense? Anyway, I'll give a sample model, which is of course the most important model in the world, because it's the model I use, so clearly it must be. It's also the one I can be most authoritative about, because I know exactly how it works and everything it does. That does not make it the most important model; it's just a model. So this particular model has the
overall structure shown here; I'll focus only on this part, from the photoreceptors up. The retinal ganglion cells and the LGN have been collapsed into one layer, because in an anesthetized animal it's hard to show a difference between the ganglion cells and the thalamic cells in their responses. In an unanesthetized animal there are big differences, but the anesthetized animal is basically what we'd call an open-loop configuration: activity goes up and stops, rather than going up and entering reverberating loops with complicated dynamics. Hopefully the anesthesia minimizes that, depending on the anesthetic chosen. Under those conditions, you can model it simply: we have some images, and we have a transformation of the images on the way to V1, which is a very simple model of on cells and off cells. This part is actually very well agreed on: the difference-of-Gaussians model captures a very good percentage of what the actual cells do at the LGN level. And then we try to explain what happens in V1. So all of this input side is fixed. If you're a neuron in the LGN, you will always have the same difference-of-Gaussians receptive field and always respond the same way, and your response is normalized by your neighbors; there's connectivity at the LGN level that I won't talk about. Your response is then given to V1. If you're a neuron here in V1, you get input from a certain part of the LGN; if you're a neuron over here in V1, you get input from over here in the LGN. These tiny little things here, that's an activity bubble about one millimeter across, so this is a huge model of V1. It's also an imaginary model of V1, as if it were all fovea, as if it were all central vision. That makes things much nicer: you don't have the difference between things in the middle and things in the periphery. It's not realistic, but you can look at it and immediately see the pattern of activity; you can see how that
relates to here. If we put in the complicated mapping that's actually true, you wouldn't see anything, so this is an idealized version that's much easier to learn from and understand. So basically we have an image, and as you can see, this is an edge-enhanced version of the image. You get a strong response at a white-to-black edge: you see on cells responding as you come up to the edge and off cells responding as you go away from it, so off cells respond here and on cells respond there. If you took this and added it to that, you'd mostly get back the original image, except in the constant areas, where there's no response: no neuron is responding out here, or only tiny bits from the noise, and no neuron is responding over here; those are both constant areas. Essentially this is called edge enhancement. It's also called edge detection, but that's a misnomer, because detection implies true or false. This is just enhancement: edges become strong in this image, and non-edges, constant areas, become weak. Yes?
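The edge-enhancing behavior just described falls straight out of a difference-of-Gaussians filter: a narrow excitatory center minus a broad inhibitory surround sums to zero, so constant regions cancel while edges produce graded on and off responses. A minimal 1-D sketch (the sigmas and sizes are illustrative choices, not fitted values):

```python
import numpy as np

# Difference-of-Gaussians (DoG) sketch of LGN on/off responses in 1-D.
# The kernel sums to ~0, so constant luminance regions give ~0 response;
# a step edge gives a positive lobe on one side (on cells) and a
# negative lobe on the other (off cells). Enhancement, not detection:
# the output is graded, never true/false.

def gaussian(sigma, size):
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def dog_kernel(sigma_c=1.0, sigma_s=3.0, size=21):
    """Narrow center minus broad surround; each part normalized."""
    return gaussian(sigma_c, size) - gaussian(sigma_s, size)

# A luminance profile with a single dark-to-light step edge.
signal = np.concatenate([np.zeros(50), np.ones(50)])
r = np.convolve(signal, dog_kernel(), mode="same")

on = np.maximum(r, 0.0)    # half-rectified on-cell responses
off = np.maximum(-r, 0.0)  # off-cell responses (sign flipped)
```

Adding `on` and `off` back together recovers `|r|`, which is why summing the two channels approximately reconstructs the original image everywhere except the flattened constant regions, as described above.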
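The Hebbian idea mentioned earlier, a connection strength is one number that grows when pre- and postsynaptic activity coincide, can be sketched in a few lines. The divisive normalization here is one common choice for bounding the growth, an assumption of this sketch rather than the only option:

```python
import numpy as np

# Hebbian rule sketch: each synapse is a single number (the weight),
# and it grows in proportion to coincident pre- and postsynaptic
# activity. Divisive normalization (assumed here) keeps the total
# weight bounded, so correlated inputs win at the expense of others.

rng = np.random.default_rng(1)

n_pre = 100
w = rng.uniform(0.0, 1.0, n_pre)   # initially random weights
w /= w.sum()                       # normalized to unit total strength

def hebb_step(w, pre, post, eta=0.1):
    """One Hebbian update: dw ~ eta * post * pre, then renormalize."""
    w = w + eta * post * pre
    return w / w.sum()

# Repeatedly present a correlated input pattern; the weight mass
# migrates toward the presynaptic units that are reliably coactive.
pattern = np.zeros(n_pre)
pattern[40:60] = 1.0               # an active cluster of inputs
for _ in range(200):
    post = float(w @ pattern)      # postsynaptic rate (linear unit)
    w = hebb_step(w, pattern, post)
```

After training, almost all of the weight sits under the active cluster; run on structured natural input instead of a fixed pattern, the same rule is what produces oriented receptive fields in the models discussed here.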
[Audience question, partly inaudible, about where the fixation point is in the natural scene.] Yes, the fixation point is always changing. This is at one instant: this is whatever is on the photoreceptors. If you move your eyes, you'll get a different pattern on the photoreceptors, but it doesn't matter where you're looking; there will be something there. In this particular case, the monkey or the person is assumed to be looking right there, but had they been looking anywhere else, you'd have a different image here, because a different set of photons would have reached the back of the eye. There's a central point whether you're seeing anything or not; that's just a function of the optics of your eye. There will be a point that maps to the center of your visual field (I can't point to it), and at that point the receptors are more dense. What we do during daily life is move that point around all the time, to wherever it will be most informative. But at any instant, which is what this represents, there's just an image. There's no complicated scan path or anything; there's just an image, that's it. Over time, the contents of this display would be dizzying to watch, for a human or a macaque in free viewing; in free viewing you go all over the place. And none of these experiments have the animals doing free viewing; they get rid of that right away. Animals are trained to fixate, or more typically the eye muscles are paralyzed so the eyes can't move, and the animal is asleep anyway, just sitting there acting like a video camera as much as possible. And that's all we're trying to explain this way. We're not trying to explain the behavior of animals in natural vision tasks, where they're interacting with the world, moving things around, avoiding shadows, performing some task. Oh my god, that's
so hard to explain; I don't even want to get into that. Experiments like that, called awake behaving experiments, are very rare and very unusual, and no one does awake behaving optical imaging experiments. If you try it in monkeys, you can't actually get something like an orientation map because of the jitter and the vibration. Doing optical imaging in an awake behaving monkey is basically like taking a Gaussian kernel and blurring the whole cortex massively: the eyes are jittering, the brain is jittering, and it's jittering on a scale greater than one millimeter relative to your measurements, so the scatter is such that you can't make out individual orientation columns. Unfortunately, anything we know about orientation columns, or any of the other little patches in the functional maps, is not measurable in awake behaving animals using current techniques, right now, today. You can measure retinotopy in awake behaving animals, because the visual cortex is huge: there are many millimeters between this area of the visual field and that area of the visual field. You can do retinotopic maps in awake behaving humans. But any other map, anything else such as direction, orientation, or color, is not measurable right now using current techniques. So everything I'm going to talk about and model concerns anesthetized animals being force-fed images while they just sit there doing nothing. Is that clear enough? [Audience question, partly inaudible, about whether this matters for people moving their heads.] Well, people don't have to worry about this for their own vision, because the mapping from the eye onward doesn't change when they move their head: the world is changing, but at least all of the internal circuitry is staying the same. Whereas if you're an experimentalist trying to measure it, anytime any movement happens anywhere
you've got whatever happened in the world and whatever happened in the head to deal with. So even for awake behaving experiments they try to lock the cortex down, but the animal is breathing and doing things, and it changes. When anesthetized, you get very regular breathing; you actually control the level of anesthesia so that the breathing is very regular, and you synchronize to the pulse so that you only measure between heartbeats. Technically, it's a mess. Humans don't have to worry about any of that; the connection is already there. It is an important question how we can still make sense of our external world even though our eyes are darting around and so on, but that's a question I won't address today, and no one will address it anytime soon, so you don't have to worry about it. It's an important question, but not one we are currently equipped to answer; we have so many more fundamental questions about how anything ever happens, and that's what we ought to be working on. Okay. And then this says face-selective area: you can cheat and say, from V1 I want to get straight to vision, straight to detecting faces or something; you can do that. Here I'm talking about other work where we're looking at the influence of the brainstem; the pons also sends information to the thalamus that goes to V1. I don't know why these are still on the slide; those are from different studies, different times. So basically we have a fixed mapping of everything feeding into V1, and the goal of these models is this: V1 starts as just an empty soup, a big soup of neurons and connections, and by the time it's done, it's supposed to look like an orientation map, or whatever else you might see in a visual system. We're trying to explain how a bunch of neurons and a bunch of connections could become a map, could become something that does something important and useful, or at least pretty. Okay. So in this particular model, all it does is Hebbian learning
for the connection from here to here, from here to here — for all of these connections. That's maybe a few dozen connections here, a few dozen connections here, all up to any V1 neuron; a few dozen connections to its local neighbors, and a few thousand — well, hundreds anyway — to its more distant neighbors. All of these connections are initially random, and all you're trying to do is show what this initial structure becomes. You make sure you feed stuff into this cortex, and you have as simple a model of V1 as you can. It's completely simple and brain-dead; as I said, if it didn't work, no one would be surprised — but it does work. If you feed in color natural moving images, you get maps for orientation; if you have two eyes, you get maps for ocular dominance; you get maps for direction, maps for color — you get all sorts of things out of it. You also get receptive fields — that's what I showed you. The neurons in the LGN prefer center-surround patterns; neurons in V1 initially don't prefer anything. They would respond based on position; they just have a set of random connections, so it doesn't matter what's in there. When it's done, each neuron has developed an oriented receptive field, stronger in one eye than the other, stronger for one color than another, and so on. That's the goal of this type of model. And if we want to look at something like visual after-effects or visual illusions, what we do is take this activity pattern on V1. What this is showing here is a rendering of what you could measure in the cortex if you had perfect optical imaging. Normally with optical imaging you just get noise — you have to measure 10,000 times and average, same input, different responses, and then you can start to see responses. Here you dispense with that and it just always works. This is showing that these neurons are not responding; there's a patchy response every time there is an edge in the image, and we'll show that those patches are actually orientation selective, and so on. Oh, and I was
talking about after-effects — but this is just a pattern of activity; how do you get from there to an after-effect? An after-effect or an illusion would be: you show this pattern, and the subject reports, oh, that's vertical. You show this — it's not vertical — but while they're having an after-effect or an illusion, you show it and they say, oh yes, that's vertical. So it's a difference in the reported pattern. We can get that from the model by measuring the orientation preference of all the responding neurons and then averaging those. Let's say you had a bunch of neurons that normally prefer vertical, and they're all active: you can say with high confidence that there's a vertical line. But in some sort of illusion they will be falsely activated. So basically you can make a connection from the pattern of activity in V1 to a behavioral result. Does that make any sense? All I'm saying is that we don't know what happens between V1 and the rest of the brain; there's an assumption, or a claim, that if a neuron here is a horizontal neuron, then when it responds, the rest of the brain is going to be told there's a horizontal line at that location — that's what it means for a horizontal-selective neuron to respond. And so if it's falsely claiming there's a horizontal line, that will be an illusion; that's when you'll have an illusion. Okay. All right, so I didn't include the equations here. I'll put up the equations when we get to our modeling tutorial, because then you'll actually have a model in front of you. For now, try to understand it in rough terms. Basically you've got a set of numbers here, a little matrix of numbers here, a matrix of numbers here; your neighbors' activities are a matrix of numbers, and you're going to compute your own activity. That's what you need to do. And associated with each connection from a pixel to V1 there's a weight, and you want to figure out the values of those weights. That's what the equations will tell you: your activity
and your weight values. So we'll get to the actual equations at the tutorial, but meanwhile, there are equations that govern the behavior of this — or it wouldn't be a model, it would just be magic. There are equations, and you run those equations and feed in inputs like this. These are meant to represent retinal waves — so this is not input from the world, just spontaneous activity that happens before birth, and all it's trying to do is have some patch that's more active than the surround. And then you feed it natural images. In this case they're monochrome, tiny little patches of images; we're only going to be looking at a tiny little area of V1, because that's all you can afford to do in your tutorial — you don't have a supercomputer, and these results were supercomputer results. So we have a tiny patch of an image, we feed in a bunch of these, then a bunch of images, and that's it; we just let everything else work. All the weights are initially random, and governed by these equations and by the patterns you feed in, the weights will change and we'll get results of some sort. Okay — you get results. What this is: this is a map model called LISSOM, from the previous slide. This projector is so blurred that you can't see the pixels, but there are pixels here — maybe somewhere here you can see one. Anyway, this is 142 pixels by 142 pixels, so each pixel represents one neuron, and the color is the orientation preference of that neuron, measured by presenting a bunch of sine gratings. A sine grating is a pattern: bright, dark, bright, dark. So you present lots of sine gratings at one orientation, and we measure the preferred response; that's what's labeled here. You do the same thing for the monkey — that's the experimental data. Basically what's happening here is that neurons become similar to their neighbors. They do that because they are connected to their neighbors, and the
connections will make them become more similar; and they become different from their more distant neighbors. Basically these connections here are excitatory and these are inhibitory, so they become more similar to their nearby neighbors and more different from their more distant neighbors — and beyond that they're not even connected. Well, actually, the model simulates connections over this scale; the real connections are over that scale, but that takes a whole lot of memory and we don't have it. In any case, you don't need to simulate these very long connections just to get maps. So each one of these is a neuron that has some orientation preference according to this color key, and you can just eyeball it and say, oh, those look similar — but you can also run a lot of different analyses; there are all sorts of analyses people have suggested for this. You can find what are called pinwheel centers, these little spots with a rainbow of color around them, like right here and right here. You can measure their density relative to the size of the blobs. There are a lot of things you can do, and some models do well on that — this one does very well; some models do not. Just to give you an idea of what this pattern might represent, I'll go back to this model here. Just now we were looking at a tiny patch; now I'm going to go back up to this whole massive model, because that's big enough to feed in a real image — you can actually see real things going on. So let's do that. Now, this is not a realistic model, because it would require a huge brain: it would be as if your foveal representation continued across your whole visual field. If your fovea took this much, you'd need a brain that big. But at least this gets back to your point about how we understand an image even though we're doing little bits at a time: we have this massive image and we get little bits in high resolution.
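The ingredients described above — random initial afferent weights, Hebbian learning driven by the input patterns, and then labeling each neuron's orientation preference with sine gratings — can be sketched for a single model neuron. This is a toy illustration, not the actual LISSOM equations: it uses Oja's normalized variant of the Hebbian rule for stability, there are no lateral connections, and every parameter value is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 9  # the model neuron sees an N x N patch of "LGN" activity

def grating(theta, phase=0.0, freq=0.7):
    """N x N sine grating: positive values stand in for ON-cell
    activity, negative values for OFF-cell activity."""
    y, x = np.mgrid[0:N, 0:N] - (N - 1) / 2
    return np.sin(freq * (x * np.cos(theta) + y * np.sin(theta)) + phase)

# Afferent weights start out random, as in the lecture.
w = rng.standard_normal(N * N)
w /= np.linalg.norm(w)

# Hebbian learning: present one orientation over and over at random
# phases; weights grow where input and output are active together
# (Oja's -y^2 w term keeps the weight vector bounded).
lr = 0.001
for _ in range(3000):
    x = grating(np.pi / 2, phase=rng.uniform(0, 2 * np.pi)).ravel()
    y_act = w @ x                         # linear response
    w += lr * y_act * (x - y_act * w)     # Hebb + normalization

# Measure orientation preference the way the colored maps are labeled:
# sine gratings at each orientation, many phases, keep the peak response.
thetas = np.linspace(0, np.pi, 8, endpoint=False)
resp = {t: max(abs(w @ grating(t, p).ravel())
               for p in np.linspace(0, 2 * np.pi, 32))
        for t in thetas}
preferred = max(resp, key=resp.get)
# The initially unselective neuron now prefers the trained orientation.
```

In the full model the same measurement is repeated for every neuron in the sheet, and the lateral excitatory and inhibitory connections are what pull neighboring neurons toward similar preferences, giving the smooth map.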
Somehow we make sense of it; here the network is getting it all at once, so it can make sense of it. This represents some tiny little patch here, and we've replicated the fovea across the whole visual field. So this has 500,000 neurons in it — the real corresponding map would have who knows how many more — 500,000 neurons and 100 million connections. And here is an image. This is a nice, easy-to-understand image; that's why it's here, because you can immediately see there's an orientation like this, an orientation like that, an orientation like that, and so on. It's a very clear and easily understood image. Here's what happens after the LGN. This is combining the on and the off channels: white pixels are ON cells active, black pixels are OFF cells active, gray means neither was active. In no case will they both be active — that's just not the way they're defined; that's never going to happen, they're anti-correlated. So this is essentially the same image, but it's become edge-enhanced. No response out here to the Scottish sky — just plain gray. Relatively little response to this gray wall, although there's some stuff going on that's been enhanced. Very strong response to the strong contours. Okay. Now, if you feed that to V1, this is what you get. This takes a lot of thought — for some people it might be obvious, but for others, for normal people, this takes thought to understand. Here everything is patchy, where before you had continuous responses. Each pixel is a neuron: each one of these little dots is one neuron in the LGN, and each one of these little dots is one neuron in V1. And if you look along a contour — on the first pass, that's a vertical line, that's clearly a vertical line, and all these neurons are activated. Here's the map, here's all the neurons, here's the ones that were activated. Well, along here, the ones that were activated are the ones that
have orientation preference near vertical. What are all these patches that are not activated? Well — let me walk you through it. This map doesn't show retinotopic preference, so there will be some neurons in this area that have the right orientation preference but not the right retinotopic preference — or they might prefer the opposite polarity. Like right here: let's say this edge is black-to-white. Right under my finger there are some neurons that you would expect to respond, but those neurons probably respond to white-to-black, so they aren't going to be responding right there. These neurons right here are red; they shouldn't be responding, because there isn't a horizontal line at that point. So the patchiness is all neurons that are present and viable — they could have responded — but are not responding. Moreover, this does not indicate something false; this is not an illusion of a broken line. This neuron prefers a pattern like that; what does this neuron prefer? A pattern like that. If you sum all of their receptive fields up, you'll see that the population prefers a whole, unbroken vertical line. It's the same everywhere: these are horizontal lines, so the responders will be the horizontal-preferring neurons — whatever orientation that is, orangey-yellow, orange, yellow, green, whatever it is, and so on. Yes — this is a plot of the cortical surface, and this is about one millimeter between blobs. All maps are on the scale of about a millimeter, so this is about a millimeter, which means this is maybe 300 micrometers in cortical terms. In retinal terms, in the real animal, it depends very much on where you are: if you're in the fovea, in retinal terms your receptive field will be very small; if you're over here in the periphery, it'll be huge. So the mapping from the world to your cortex is variable, but the pattern in the cortex is always about the same — always about a millimeter, so you'll always have blobs on this scale, the 300 micrometers. Even though this model doesn't
have a fovea or periphery, it would still look like this — it would just be at all weird, crazy angles, because of the crazy-angled mapping, but the actual blob size would be similar. There are always blobs on the order of less than half a millimeter or so, and they repeat at the millimeter scale. So the size of the blobs is not related to any pattern in the world; it's related to the cortical surface only. So this is the putative meaning of an orientation map: the representation at the photoreceptor level is like a camera; the representation at the LGN level is like an enhanced special mode on your camera; the representation at the cortical level has become quite abstract, in that there's an indication of verticality over here, an indication of verticality over here, an indication of horizontality over here. And the further away you get from V1, the less specific the mapping to the retina becomes: the particular location of the horizontal line will not be known, because those neurons respond to a horizontal line here and here, over quite a range. These are all what are called simple cells in V1 — simple cells respond at one particular spot — whereas a complex cell responds just as well to this contour or that one, over a small range, and the further you get from V1 the larger that range becomes. So at this level we've started to abstract: we've gone away from the input in terms of pixels to a representation in terms of the local orientations in the image, and the amount of locality gets less and less the further away you get from V1. First it was about pixels, then it was about edges; the further you go from the input, the harder it becomes to express. This level is very easy to express — it's hard to understand, but it's very clear that these neurons respond to an oriented edge. At some point you can imagine there are house-selective neurons — there are claims for this, anyway — and there are
Jennifer Aniston-selective neurons — there are claims for that — and the argument is that these maps eventually build up representations of this type. Now, we don't have data from those areas that would be sufficient to build a model, really, so for now you can only speculate. Okay — now, practicality. This was the simplest biologically relevant model I can even think of that explains any of this data. If you want, you can find vastly more complicated models: many boxes and arrows — in fact hundreds of boxes and arrows in the same diagram, with different subpopulations — that explain lots of data and are very hard to understand. What if you wanted to actually simulate things like this? You can do that in any simulator. This model I just showed you, you can do in any language — it's relatively simple to implement — or in GENESIS and NEURON. You've already looked at NEURON, right? In your tutorials, what simulators have you used so far? MOOSE? MOOSE is at the same level — MOOSE is GENESIS, roughly: there was GENESIS, which became GENESIS 2 and a sort of GENESIS 3, and MOOSE is a kind of offshoot that got implemented and so on; Upi used to use GENESIS at the time. So MOOSE is in this category: they focus on the neuron level or below. You can put things together into larger areas, but that's not what they're built around — they don't provide particular support for it, and such things are very hard to do in simulators like that. You can also get lots of simulators for neural networks in general — if you just want point neurons connected to other neurons, there are lots of simulators like that — but a lot of the issues here are in setting up this mapping between areas and these patterns of connectivity. It's easy to get that wrong and spend a lot of time doing it, so it's not really useful to use a simulator like that. There's the one
simulator that I think is useful, and that people do use regularly for maps: the NEST simulator. I wrote about it a long time ago: originally the actual core of the simulator was controlled by a PostScript-like, reverse-Polish language, which was very hard to use. Now it's been nicely wrapped in Python, so that's not really as true anymore. But it's not built around maps, so it's missing some of the abstractions that I think are useful — though it certainly can be used for maps, and people do use it for maps. A lot of people use MATLAB for maps, because MATLAB is great at matrix-times-matrix-equals-matrix — anything like that it does really well. But a lot of the things you want to do are not that, and that's actually a very small amount of your total code; most of it is in managing your simulations and collecting and analyzing results, and that's more difficult there. And if you really want to be fast you can write in C or C++ — but don't ever do that; it's just going to take a lot of time. Instead, what you should use is this simulator, because I wrote it — well, I paid people to write it; I thought of it, anyway. The idea is that it starts out at the level where we have data, the optical imaging data, and then adds everything in from there. Basically it provides you abstractions at this scale: a box of neurons like this is a meaningful entity to Topographica. Once you create this in NEURON or GENESIS or NEST, it becomes a whole bunch of neurons — 100 million neurons or whatever it is, 100 billion connections — and those are all you have; you don't have this higher-level abstraction over them. But in Topographica you keep this around: you can refer to all of these neurons very easily and manipulate them, and all of these, and you can connect between these populations arbitrarily — not just at the beginning. In the other simulators, at the beginning, when you're setting things up, you have these
abstractions, but they disappear when you're actually simulating, because fundamentally those simulators are built around neurons and connections and that's it — or compartments, or some even finer level of detail. So if you want maps with a whole lot of different populations, a whole lot of different interconnected areas, it's really hard to use simulators like that. Instead we'll use a simulator like this one, which 12,000 people have downloaded. I don't know who they are — they must just be random people on the street, because there aren't that many people who do map-level simulations in the world; there are only a couple of hundred of us, maybe. So I don't know what's going on there; maybe people are using it for their school coursework or something. Anyway, basically what this will do is let you rapidly connect a whole bunch of things — you can make a model of the brain in fifteen minutes if you want to. You just say, oh, I want 40 areas, and I want them connected like this, and Topographica will do that for you. Adding detail is hard in Topographica, whereas in other simulators detail is easy and connecting it all into these big populations is what's hard — so it's a trade-off. At this level, rapidly connecting a whole bunch of things is very easy. You have a whole lot of parameterized components; you can make them do whatever you want, they're already written, and it's all in Python, so you can control it easily. And for the goals of the summer school — bridging across levels — Python makes interfacing to other simulators amazingly simple, and I'll do a demo of that in the tutorial today. So basically you can have low-level stuff in here, and use Topographica to set up the large-scale environment for it: have your very detailed model of something, whereas usually, in detailed models, your model of the rest of the brain is null — zero, or almost nothing. This lets you go further: it's a horrible model for the
rest of the brain, but it's better than zero. So you can have the other information coming in: if you were going to simulate a tiny chunk of V1, you can do that at whatever level of detail you want, and have this model bring in everything else at whatever level of detail you can handle. I know this isn't a very strong selling point, but it's better than nothing: Topographica will give you a model of the rest of the brain that's better than nothing — and amazingly, nothing is what almost everyone does. It's amazingly easy to do better than nothing, so even though it's a ridiculous model, it's still literally better than nothing, and nothing is very typical. Anyway, you can model the world in all sorts of ways, and I won't get into that. Basically there's a whole bunch of libraries for generating and composing patterns; all of that is a separate module, so you can download it and use it with whatever simulator you want, if you like this ability to generate patterns. You can take data straight from video cameras, whatever you want — there are modules for all of that. Basically, we have very strong support for boxes and arrows. It doesn't matter what you put in a box — you can simulate that in some other simulator, I don't care; if you can get a matrix of activity out, a matrix of floating-point numbers, then it'll work. Any sort of box like that, with a matrix of numbers that can be sent here and here — Topographica will handle the intercommunication between all of these components. It'll do all of that at the high level, and it'll also simulate what's in the boxes, as long as what you want in the boxes is simple. If what you want in a box is a big complicated mess, it won't do that for you — it won't help you do that — but other simulators will, and so that's fine. So, how many people know the difference between a clocked simulator and an event-driven simulator? Nobody? In a clocked simulator there's a metronome: you compute here, nothing, compute here — according to a
timeline, you compute at regularly spaced intervals over time. An event-driven simulator — say you have a set of spiking neurons — would compute only when a neuron spikes: that produces an event that is then transmitted to these other neurons, and you don't compute them at all until they have an incoming event. Once they do, you compute what would have happened in the time you weren't computing. That's called an event-driven simulator, versus a clocked one. Topographica supports either; fundamentally it's event-driven, and then it just works, so you don't have to know much about that. But what does matter, when you're connecting across simulation levels, is this: the fact that Topographica doesn't have a clock is good, because if your underlying simulator has one clock and you want to connect to some other simulator with a different clock, Topographica won't care — it'll just handle events as they come in. That's very useful when you're bridging simulators; simulators that have clocks are more difficult to connect to other simulators with clocks, unless you can get those clocks synchronized. That's all that was about. I already talked about that, and that. As I said, there's basically a library of all sorts of components. They'll give you one-dimensional patterns — that would be like a random number stream; there's a whole big library of all sorts of random number streams you might want — and a huge library of different two-dimensional patterns; both of these are usable externally from any program. There's a bunch of what we call transfer functions — basically, take a set of data and transform it in some way, useful for activation functions or whatever — and a whole bunch of models of what we call sheets and projections: a sheet would be like one of the boxes, and a projection is like a connection. You'll do all of these in the tutorial when we come back.
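To make the clocked-versus-event-driven distinction concrete, here is a minimal event-driven loop in the spirit just described. This is a sketch, not Topographica's actual scheduler, and all names in it are made up: two "boxes" with incompatible time steps share one priority queue of timestamped events and no global clock.

```python
import heapq

def run(events, handlers, t_stop):
    """Event-driven loop: events are (time, target, payload) tuples,
    handlers maps a target name to a function that consumes one event
    and returns any further events it wants to schedule.  Nothing is
    computed between events."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, target, payload = heapq.heappop(queue)
        if t > t_stop:
            break
        log.append((t, target, payload))
        for new_event in handlers[target](t, payload):
            heapq.heappush(queue, new_event)
    return log

# Two components with different internal "clocks": one produces
# output every 0.7 time units, the other every 1.0.  The event
# queue interleaves them correctly without any shared step size.
def fast(t, payload):
    return [(t + 0.7, "fast", payload + 1)]

def slow(t, payload):
    return [(t + 1.0, "slow", payload + 1)]

log = run([(0.0, "fast", 0), (0.0, "slow", 0)],
          {"fast": fast, "slow": slow}, t_stop=3.0)
```

Because every event carries its own timestamp, bridging two simulators only requires translating their outputs into timestamped events; neither side needs to know the other's step size.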
Basically, that's it — that's the overview of the simulation tools. When we get to the tutorial itself: we were talking about one model at that point; this is the model, this is how it's implemented, and this is what you'll be running, and so on. This is all freely available, of course — you can just download anything; the book that describes all of it is freely available, and all the figures from the book are freely available. The simple message is: you can do a map model, any kind of map model you want, however complicated. You can do it in NEURON and GENESIS — but again, they don't help with complicated map models — and you can write your own code, but don't do that. Instead, use this, and you'll use it for your tutorial. Interfacing between Topographica and NEST is the same as interfacing between Topographica and NEURON; I don't know how to interface between Topographica and MOOSE, but I assume it's possible in similar ways. Any final questions? All right — hopefully see you at the tutorial, and we'll go from there.