We're going to tell you a little bit now about what we think is happening in memory, and it's very nice to be here to present it to you. Let's see where we go. But before that, let me just briefly show you our labs. So this is our new lab, by the lake, and my office is somewhere in there; you can't see it. It's a very nice campus, and some of you have actually been there. Yes? Some people have been there. We have an institutional collaboration, both with Brandeis and with Edinburgh University, and so various people go back and forth at various times, and you're all welcome to come by. So that's where I work. So I'm going to talk a bit about all of this. My personal favorite interest in all this chemical computation is memory. So I'll give you a bit of background, and I'll tell you where it all started, how I got interested in these questions, and the things that we've been studying ever since. I'll tell you about the different kinds of plasticity that we have, and I'll give you a short tour of what are basically exotic specimens of bistable systems, which are systems that can store information in chemistry. Okay, so do you want to know what that is? Yeah? Okay, good. Now, one of these days, and I'm warning all the young people out there, one of these days your junior colleagues are going to say, "You guys actually stored information on round things spinning around?" You know, the idea of storing information on a thing that spins around is going to sound quaint. And I'm going to the other extreme: I suspect I'm the only person here who has actually punched a program onto paper punch cards, and if you made a mistake, you threw the card away. Oh, okay, I have company. Good.
All right, so paper punch cards... For you guys, that would be ancient. Okay, so this is a form of memory. And this kind of memory, magnetic memory on your hard disk, works because you can write information and it stays there. It won't stay there forever, but the time course of demagnetization of any of the bits on your hard disk is very long. And the neural analog of this is a sort of fuzzy idea, which we still have and which is incorrect: that physical structure, physical structure in the brain, is nothing more than molecules; you build it, and what you build will stay there. But as we already discussed, the lifetime of any individual molecule in your brain, in your synapse, is somewhere between minutes and days. So actually, physical stability doesn't work. It's not enough just to say that I have built an axon or a dendrite or a synapse, and once the molecule is there, the information is stored. It doesn't work. So this kind of stability won't really store information in the brain. Okay, so you all know that kind of memory? Yeah? That's your dynamic RAM. Now, this is called dynamic RAM for a very good reason, which is that it's actually a rather fragile kind of memory. This is a kind of memory where you store information by inserting some charge into your memory elements. But these are all capacitive, and capacitors, as you know, if they have any finite leakage resistance, discharge. And they do discharge, within a fraction of a second, for a dynamic RAM. So you have all these refresh circuits that basically go through the entire contents of a dynamic RAM. They do this invisibly, so it doesn't affect you, but they go through all of it: every single memory cell is periodically read, before its contents are lost, and then written back, so that the actual storage is refreshed. And this goes back to the early days of computing.
In the early days, the refresh process was something you had to take care of yourself. So this is a way to store information for a long time, but it requires a continuous refresh. And there's actually a neural network analog of this, which is the reverberating memory. So you say that you have a circuit of neurons, and you add some kind of feedback, such that this neuron turns on that neuron and so on, and then you have the reverberating activity going around for a while, and that works as a memory as well. [So we'll just ask you to put the microphone on. Sorry, we forgot about this. Sorry about that. I'll start again.] Okay. So reverberation is another way that you can store information, and there are actually brain circuits which are believed to do this. Okay. Do you recognize this? Good, good, good. Some people have studied computer logic. Okay. So each of those is a NAND gate, and they're connected in a particular configuration, which is a bistable system. So this is called a static memory, static RAM, and again, a very large, very important class of memory in your computer uses this form of storage. That's the high-speed memory that is built into your chip, into your processor itself: the cache memory. This form of memory, if you actually work through the logic diagram, will stay put in a completely stable manner as long as power is applied. Yeah. And that is the form of memory that I'll be discussing in a little while, where you have bistability, where you have two states the system can adopt, and it'll stay there as long as power is applied, power in this case meaning that the molecules are refreshed and ATP is available. What is ATP? How many of you have heard of ATP? Yeah. Okay. How many of you know what it is? Okay. All right. So you know what it is, and you've heard of it, right? ATP is adenosine triphosphate, the energy molecule, so to speak, of the cell.
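To make the bistability concrete, here is a minimal sketch (my own illustration, not from the lecture slides) of the cross-coupled NAND latch behind static RAM. Inputs are active-low: pulling set_n low stores a 1, pulling reset_n low stores a 0, and with both inputs held high the latch simply retains its state for as long as "power" is applied.

```python
def nand(a, b):
    # NAND gate: output is 0 only when both inputs are 1.
    return 0 if (a and b) else 1

def latch_step(set_n, reset_n, q, q_bar):
    """One settling pass of the cross-coupled NAND pair."""
    for _ in range(4):  # iterate until the feedback loop settles
        q = nand(set_n, q_bar)
        q_bar = nand(reset_n, q)
    return q, q_bar

# Start in the 0 state, then 'write' a 1 with a brief low pulse on set_n.
q, q_bar = 0, 1
q, q_bar = latch_step(0, 1, q, q_bar)   # set pulse: q becomes 1
q, q_bar = latch_step(1, 1, q, q_bar)   # inputs released: state is held
print(q, q_bar)  # 1 0
q, q_bar = latch_step(1, 0, q, q_bar)   # reset pulse: q back to 0
print(q, q_bar)  # 0 1
```

The feedback through the two gates is what makes the two states self-sustaining, which is exactly the property the chemical switches later in the lecture need.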
Actually, well, yeah. Let's just leave it there. Okay. So the question which I'm going to not fully answer, but sort of leave dangling, is whether memory is the same thing as synaptic plasticity. I'll give you some arguments for it, and there are, of course, plenty of arguments that say that that's definitely not the whole story, but a lot of information is very likely stored through synaptic plasticity. You all know what synaptic plasticity is? No? Okay, good. So this is the ability of individual synapses to change their weight, which is the efficacy of information flow from the pre-synaptic cell to the post-synaptic cell. So let me just run you through this, because this is a kind of figure that you'll be seeing again and again. How many of you have seen this kind of experiment, or seen this kind of a trace? Right, about half of you. So what you have here, this is a typical brain slice experiment, but you can do it in tissue culture and you can do it in vivo if you like. I'll give you the gory details, literally the gory details. You go and catch your mouse or your rat, yeah? You remove its head and you take out its brain, and you do this really, really, really fast. And of course you have to do this in accordance with all the animal ethics procedures of the institute, otherwise you're in big trouble. Okay. So you remove its brain, and you have to do it fast, because if you take a long time over removing somebody's brain, what happens? It dies, that's right. Good. So you take it out really fast and you put it in oxygenated Ringer's solution, which is basically a poor substitute for your cerebrospinal fluid, but it's got enough oxygen and enough glucose and enough good things in there to keep the slice alive for a little while in a dish. And then you slice the brain into sections about 400 microns thick.
And with that you have intact a little circuit in the hippocampus, a little circuit where you can selectively stimulate the presynaptic side and record from the postsynaptic side. Okay. So you're doing that, and for every stimulus you record the output, in this case the field EPSP, don't worry about the details, and you establish a baseline, and we're going to call the baseline one. Okay. And you can see that it's pretty reliable; each of the little spots represents one measurement. So for about half an hour you've established the baseline, and then you do something horrible. You can do things like pouring in KCl, or you can zap it with large amounts of current, or many, many high-frequency stimuli, or put in other interesting chemicals, but you only do this briefly, let's say 10 seconds to five minutes or so. And at the end of this you get a huge increase, in this case a two-and-a-half-times increase, in the synaptic strength, in the efficacy of getting your output given the same value of input. So the input that used to give that level one of output now gives 2.5. It declines a little bit, but then it stays stable for a very, very long time. Okay. This has been measured in slices for up to eight hours roughly, and it's been measured in vivo, I gather, for almost a year. Not at that original level, but it's still measurable for a very, very long time. So this is a typical experiment which measures synaptic plasticity, and there are variants of this which will actually measure what happens at one single synapse, between one axon and one spine on a post-synaptic cell. So this is the kind of measurement which allows you to figure out what's happening in the synapse. Okay. So now, you've all heard of this famous statement. Yes, no? Familiar? To whom is it not familiar? No one is going to admit it. Okay. Anyway, this is Donald Hebb.
His rule says, and I'm going to read it yet again: when an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. Okay. Otherwise known as: cells that fire together, wire together. All right. So this is a very general statement, and in fact it's sufficiently general that both the form of plasticity you see here and spike-timing-dependent plasticity can actually be extracted from this statement. And you can see that he did this a very, very long time ago, when a lot of these principles were extremely fuzzy. So he anticipated all of this. Okay. So that's sort of the context in which the story is going to play out. Let me start with the good old days, if I can persuade this thing to stay put. The good old days is when I started to get interested in these questions, when the question of memory was a relatively simple one, right? You needed associativity: you needed to know when an event was worth remembering, in other words. You needed some kind of logic to decide the relevance of the signal that came in. And you needed some kind of switch to store the information for a long time in a stable way, because even back then it was pretty clear that the hard disk kind of persistence of memory, or the reverberating circuit kind of thing, wouldn't work. So you need some kind of bistable system. And there were candidates for all of these, and that would give you memory. And it looked like a lot of it was already sewn up, right? So maybe that's when I should have said, okay, we've got this sorted out, let's wrap it up. So let's take an example view of things from around that era. You all know about Pavlov's experiment? Yes, yes. Now the usual suspects are all nodding.
As long as they're not nodding off, it's all right. Okay, so in Pavlov's experiment, just to remind you, you have a doggy. Pavlov was actually interested in the digestive system, but like a good scientist, he was alert. He was interested in measuring salivation and the properties of saliva, and so he had a way to measure the flow of saliva from these dogs. He had a drip meter, basically. A drool meter. And what he noticed was that the dogs, which were presented food every evening, salivated, which is what they naturally do. So food is the unconditioned stimulus, right? Without any particular training, or maybe it happened in their youth, the dogs knew that the presentation of food is something which should be followed by salivation. Good. However, because this was running in a typical lab, there was a bell that rang a few minutes before the food was given out, presumably to tell the staff that it's time to hand out the food. And the dogs learned to associate this, and what Pavlov noticed was that the bell started to cause salivation. Okay, so this is classical conditioning, the bell being the conditioned stimulus. So now let's take a completely reductio ad absurdum view of this, with just a few neurons. This is completely fictional, but it's a good way to start. So here's your food, and your food neuron is connected to the salivation neuron. So when the food comes, this stimulus turns on the saliva. Okay, nice and straightforward. And of course completely fictional, but let's start with that. Good. So now, let's suppose that we have this arrangement where you have a bell neuron, which is the neuron that responds to the bell. And let's say, just for the sake of argument, you have another neuron that responds to a light, a light flashing. Okay, now what happens? So you start out: the food comes on, the dog salivates. The food comes on, the dog salivates. However, the bell always precedes the food.
So this pairing happens often enough that, in due course, we could strengthen the synapse through this repeated association of bell coming before food, bell coming before food. If there were a learning rule that said that every time you have activity on the presynaptic side here, followed by activity there, you strengthen that synapse, and only that synapse, then in due course you could learn that when you get activity on that cell, you should salivate. Okay. Furthermore, you also have to have specificity, because you don't want the dog to learn that when the light goes on, it should salivate. But in principle, and it can be and has been done, you can go the other way around: you can have the light as the conditioning stimulus and the bell be something completely random, and now the dog could learn that actually it is the light that matters. So you have to have stimulus specificity: the strengthening should only happen when you have a specific association. Okay. So what I've tried to do here, in this toy model, is to show you that with this kind of logic, the association of activity here with activity there, and the strengthening of a synapse, you can actually get something that looks like classical learning behavior. That is, you see an association of a conditioned stimulus with a response, in this case salivation, in principle happening through the strengthening of a synapse in a very specific manner. So this is sort of the toy model, the good-old-days view of how things might work, but one can be a little bit more elaborate than that. So just to drive home the point: the NMDA receptor actually has a lot of the features that are needed to do this operation. That is, you have your receptor, which is sensitive to glutamate, and it's also sensitive to the potential here. Why is it sensitive? I think it's there on the slide.
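The toy circuit above can be put in a few lines of code. This is a minimal sketch with invented numbers (the threshold, learning rate, and neuron names are all hypothetical, not from the lecture): a Hebbian rule that strengthens only the co-active synapse produces both conditioning and stimulus specificity.

```python
THRESHOLD = 0.5
LEARNING_RATE = 0.1

# Three inputs converge on a salivation neuron; only food -> salivation is innate.
weights = {"food": 1.0, "bell": 0.0, "light": 0.0}

def salivates(active_inputs):
    """The salivation neuron fires if summed weighted input crosses threshold."""
    return sum(weights[name] for name in active_inputs) >= THRESHOLD

def conditioning_trial(active_inputs):
    """Hebbian rule: if the postsynaptic cell fires, strengthen exactly the
    synapses whose presynaptic inputs were active, and no others."""
    if salivates(active_inputs):
        for name in active_inputs:
            weights[name] = min(1.0, weights[name] + LEARNING_RATE)

print(salivates({"bell"}))       # False: before training, the bell does nothing
for _ in range(10):
    conditioning_trial({"food", "bell"})   # bell repeatedly paired with food
print(salivates({"bell"}))       # True: bell alone now triggers salivation
print(salivates({"light"}))      # False: the unpaired light stays ineffective
```

The "only that synapse" clause in the update is what gives the specificity: the light synapse never sees paired pre- and postsynaptic activity, so its weight never moves.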
Why is it sensitive to the potential, to the post-synaptic potential? Yeah. Just as a hint, that's a magnesium ion over there. [Mg block.] Mg block, but why should Mg block it? Hmm? [The blockage is voltage dependent, so it depends on the post-synaptic activity.] Well, magnesium is just an ion. [The blockage is voltage dependent.] Very good. Right. So the idea is that the magnesium is a positive charge, and it's sitting there plugging this receptor. And when the post-synaptic side becomes positive, it just repels it out, and then you get conductance, exactly as you were saying. So that's the molecular logic there: you have the association of high potential post-synaptically with the presence of the neurotransmitter. And when you have both of them, then the channel opens and calcium comes in. And calcium, as you all know, is sort of the grandfather messenger; it turns on pretty much everything in the cell. So you get calcium coming in, and then you can expect all sorts of fun things to happen, including an increase in synaptic strength, for example. Okay. So why am I so hung up on synaptic weights? There are several reasons why this is important and why I think it's a reasonable place to start looking for a cellular or biophysical basis for memory. First of all, there are lots of synapses on every cell in a mammal, and for that matter in an invertebrate as well. This is a good large number, and so in principle it confers a very large memory capacity. Secondly, synapses are specific: one axon will connect very specifically, through that synapse, to the target cell. And if you want to change things specifically, you have to change that one weight. And then you get very, very specific strengthening of the input-output coupling.
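The molecular AND logic just described can be caricatured in a few lines (the -40 mV figure here is a rough illustrative number, not a measured constant): calcium flows only when glutamate is bound, reporting presynaptic activity, AND the postsynaptic membrane is depolarized enough to expel the Mg2+ block.

```python
MG_RELIEF_MV = -40.0  # rough depolarization needed to relieve the Mg2+ block

def nmda_calcium_influx(glutamate_bound, membrane_potential_mv):
    """True only when pre- and postsynaptic conditions coincide."""
    depolarized = membrane_potential_mv >= MG_RELIEF_MV
    return glutamate_bound and depolarized

print(nmda_calcium_influx(True, -70.0))   # glutamate alone: still blocked -> False
print(nmda_calcium_influx(False, -20.0))  # depolarized alone: no ligand -> False
print(nmda_calcium_influx(True, -20.0))   # coincidence -> True, calcium enters
```

So the receptor is, in effect, a molecular coincidence detector: the AND gate that Hebb's rule needs.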
Another reason is simply that, if you assume that synapses are a good way to store information, you can actually make lots of theoretical predictions, and I'll go over this in a little while, which seem to make a lot of sense: such networks actually do things which look a little bit like memory. And finally, synapses actually seem to be equipped to do all of the good things that we would like them to do, within what we understand. So let's look in a little bit more detail at some typical networks and where synaptic plasticity happens. You've all seen a network like this, right? Yes, no, maybe? Yes. So this is your standard feed-forward network, and the nice thing about this is that your quote-unquote neurons, these little round things, they're called neurons for the sake of argument, are very, very simple. They're just some kind of summation rule and some kind of output function. The interesting stuff in this network is actually in the synaptic weights. And what you can do with these networks is quite interesting. Just by setting the weights appropriately with some kind of learning rule, you can train this network to learn by example, as opposed to what you have to do with a computer, which is to train it by explicitly putting information into a particular location in memory, which is what you do when you write a file or something like that. Here, you just give this network a lot of examples and it will learn something through an appropriate learning rule. Of course, "a lot of examples", unfortunately, in this case is sometimes a very, very large number of examples. So this is something like the idiot child of neural networks, if you use the simple learning rules. Just by changing the weights, you can learn some fairly sophisticated things, like, for example, how to pronounce English words, given that the spelling is completely irrational. Yeah? Okay, so this is a very, very old example.
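Here is a minimal sketch of that "learning by example" idea, using the classic perceptron rule on the simplest possible network, a single summing unit with a threshold output (the task and all the numbers are my own illustration): all the learned structure ends up in the weights, and nothing is ever written to an explicit memory location.

```python
def output(weights, bias, x):
    # Summation rule plus a threshold output function.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Perceptron rule: nudge weights toward each example's correct answer."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            err = target - output(weights, bias, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Teach it logical OR purely from input/output examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train(examples)
print([output(weights, bias, x) for x, _ in examples])  # [0, 1, 1, 1]
```

The point of the toy: the program never stores "OR" anywhere; the behavior is implicit in the weights after repeated exposure to examples.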
That old example is NETtalk, something that Terry Sejnowski did years ago, a program which at that time performed as well as, if not better than, a huge and very tediously assembled database of English pronunciations which DEC, Digital Equipment, had implemented. He did this with a neural network: basically, the input layer got the letters that made up the word, and the output layer generated the phonemes, the sounds you should expect it to produce. And it did pretty well on most things, except for, you know, George Bernard Shaw's example of how irrational English spelling is. Yeah? Yes, no? All right. How many of you know how to pronounce this? Yeah? No. How many of you know this one? Fish. Yeah, you know this. Come on. "Enough". "Women". "Motion". Fish. Okay. All right. Well, I suspect that even NETtalk would have had trouble with that one. Okay, so anyway, the English language is completely irrational in its spelling, but this neural network was able to do something with it. Okay, here's another kind of neural network: the auto-associative, fully connected network, also known as the Hopfield network. This is actually very reminiscent of the histology of at least two parts of the brain: one is the piriform cortex and the other is the hippocampus. And without going into details, let me just describe this a little. Each of these round balls represents the soma, the stick part represents the dendrites, the lines going around represent the axons, and the little circles represent synapses. So this is a fully connected recurrent network, and it's getting, in addition to that, hetero-associative input from this set of inputs; then you have your fully auto-associative part over there, and then you have your outputs. And this is actually pretty good at doing various kinds of recognition and pattern completion.
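A minimal sketch of such a network (tiny six-unit patterns, entirely my own illustration) shows the pattern-completion behavior: store patterns with the Hebbian outer-product rule, then let a corrupted cue fall into the nearest stored attractor.

```python
def store(patterns):
    """Hebbian outer-product rule over +/-1 patterns; no self-connections."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    """Repeated threshold updates: the state settles into a stored attractor."""
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(wij * sj for wij, sj in zip(row, s)) >= 0 else -1
             for row in w]
    return s

stored = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
w = store(stored)
cue = [1, 1, 1, -1, -1, 1]        # the first pattern with one unit corrupted
print(recall(w, cue))             # completes to [1, 1, 1, -1, -1, -1]
```

This is the scrambled-face demonstration in miniature: the memory is the weight matrix, and recall is just falling downhill into the closest attractor.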
Just for reference, these are the learning rules, so to speak, for the Hopfield network. And the key thing is that these are synaptic weights. So again, the information in this network is stored in the weights. This is another reason why we should be interested in synaptic weights as an underlying basis for memory. Of course, it's a little bit hard to see how you get this sort of thing from Hebb's rule, but in fact you can do it; I'm not going to go into those details. Just to show you, these are a couple of examples of what this kind of network can do. So it's trained on the face and on some random pattern, through the assignment of synaptic weights. You give it a partial face, just the top of the bonnet, and it remembers it; it reconstructs the face, so to speak. And you give it a very, very scrambled version of the face, and it reconstructs that too. One way of thinking about it is that each of the stored memory states of the network is some kind of an attractor, and the network falls into whichever attractor it is closest to. Interestingly, this figure also applies to bistable systems, and I will come back to that later. Okay, so now I promised, or threatened, depending on your point of view, to tell you a bit more about feedback and bistability in chemical systems. So let's go there. Here's a very simple feedback circuit: A activates B, and B comes back and activates A. And we've already got some kind of hand-waving connection. Let's do this in a little more depth. Suppose we do what a chemist would do to measure this kind of circuit, which is to ask: given a certain amount of B, what will be the activity of A? Right? And a chemist would do that by, say, blocking the effect of A back onto B in some clever way or other, and then plotting the dose-response curve. So this axis is the amount of B, the stimulus, and that gives you that amount of activity of A. And you typically get a very standard sigmoid through this. Okay?
Now, let's do the converse and, to complete the analysis, ask: what happens if you have a certain amount of A and you measure the amount of B? Now you get another kind of curve, in red this time, which describes the activity of B. And to complete the picture, you have to ask what happens if you just let the system run freely. To do that, you can take the two curves, which are of course describing the same system, and plot them on the same axes, which you can do by mentally rotating one of them about the 45-degree line so that it lands on the other. And you end up with a pattern of intersecting curves that looks like this. Where do these curves intersect? Because these are all steady-state input-output curves, any intersection point is a quote-unquote steady point of the system. Let me just run through that in a little bit more detail, if I can. Okay, maybe I don't have the slides. Okay, so the argument goes like this: at the point where the curves intersect, if you were to give a certain amount of B at that level, it would produce just the right amount of A to produce the original amount of B. So in other words, the feedback, as analyzed by these curves, means that this is a steady point of the system. And if you happen to have curves which intersect more than once, and you can show that they'll intersect an odd number of times unless they're tangent, then the outer two points are stable points of the system, and the middle point is something like a transition point. Okay, so this system is a bistable system, because this is a stable point, this is a stable point, and this is sort of like a transition between them. So, how many of you have studied dynamical systems? Most of you, probably? Okay, fine. Are you familiar with saddle-node bifurcations? All right, here's a saddle-node bifurcation.
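The intersection argument can be checked numerically. In this sketch (all parameters invented for illustration), A and B have the same steep sigmoidal dose-response curve with a small basal activity, and we scan for self-consistent states of the loop, points where a level of B produces exactly the A that reproduces that B. The scan finds three crossings: the outer two are the stable states and the middle one is the transition point.

```python
def dose_response(x, basal=0.05, k=0.5, n=4):
    """Steep sigmoidal (Hill-type) activation with a small basal activity."""
    return basal + (1.0 - 2 * basal) * x**n / (k**n + x**n)

def steady_states(step=1e-4):
    """Scan for levels of B that reproduce themselves through the loop:
    B sets A = dose_response(B), and that A makes B back again."""
    points, prev_positive = [], None
    b = step
    while b < 1.0:
        drift = dose_response(dose_response(b)) - b
        positive = drift > 0
        if prev_positive is not None and positive != prev_positive:
            points.append(round(b, 2))   # sign change: a steady state
        prev_positive = positive
        b += step
    return points

print(steady_states())   # three steady states -> a bistable system
```

The sign of the drift between crossings tells you the stability: where drift pushes back toward a crossing from both sides, that crossing is stable; the middle one is repelling.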
Okay, here's one steady point, there's another steady point, and there's a saddle-node bifurcation there. So these are systems where you can imagine a marble on the cusp of a saddle, right there. It can choose to roll this way, it can choose to roll that way, but once it's rolled into either of the valleys, it's going to stay there. And this comes out of the chemistry: it's a bistable system coming out of chemistry. So here is one of the earliest models of this kind, which John Lisman from Brandeis studied. He made the prediction that the molecule CaMKII (calcium/calmodulin-dependent protein kinase II), which has this marvelous property of autocatalysis, that is, it turns on its own activity, can as one molecule do all of these things. You have to have the back reaction, which is carried out by protein phosphatase 1 in coordination with calcineurin, but the argument is that the positive feedback part of this is provided by CaMKII itself. It's interesting that he now actually thinks that this is only part of the story, because CaMKII does not appear to play the very long-term role that he had envisaged, but the basic idea is still there, and there are some interesting models that I'll be discussing in a moment. And this, as you can see, has quite a long pedigree as an idea; other people also suggested it around the same time. So now let's go back to synapses and our Pavlovian conditioning, and say that supposing we had a weak synapse, where the conductivity was determined by the amount of receptors, say, at the postsynaptic side, and you gave your pairing, which led to calcium influx, which caused some kind of signaling activity, which caused the switch to flip, let's say CaMKII to go on, which persuaded more receptors to be present at the synapse. Okay? So this sequence of events hopefully begins to link up what is happening at the chemical level with your circuit-level change in synaptic strength.
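The switch idea can be sketched as a toy rate equation (the rate constants are invented for illustration; this is not Lisman's actual model): an active kinase activity a feeds back on itself cooperatively, a phosphatase-like term turns it off, and a brief calcium-driven stimulus flips the system from the low stable state to the high one, where it remains after the stimulus ends.

```python
def simulate(ca_pulse=(20.0, 25.0), t_end=100.0, dt=0.01):
    """Euler integration of da/dt = basal + autocatalysis - turnover + stimulus."""
    a = 0.01                                     # start in the 'off' (low) state
    t = 0.0
    while t < t_end:
        stimulus = 0.5 if ca_pulse[0] <= t < ca_pulse[1] else 0.0
        autocatalysis = a * a / (0.09 + a * a)   # cooperative self-activation
        da = 0.01 + autocatalysis - a + stimulus  # '-a' is the back-reaction
        a += da * dt
        t += dt
    return a

low = simulate(ca_pulse=(0.0, 0.0))   # no calcium pulse: stays in the low state
high = simulate()                     # brief pulse: flips high and stays there
print(round(low, 2), round(high, 2))
```

The memory is in the state, not the stimulus: long after the pulse is over, the high activity sustains itself through the autocatalytic term, exactly the "power applied" stability of the static RAM analogy.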
So let me just run through that again. You had your synaptic plasticity rule, which was that you had to have associativity of pre- and postsynaptic activity. In other words, the bell had to be followed by the food, bell followed by food, again and again. This associativity, through repeated reiterations, leads to calcium influx, which leads to signaling activity, which leads to the turning on of a chemical switch. When the chemical switch is turned on, it causes more receptors to be present, which causes the synapse to become stronger. And so now you have learnt that when there's a bell, you should salivate. Okay? So this, in a very, very crude nutshell, is the picture that many years ago seemed to be a fairly complete account of associativity and of the mechanism needed to store information for a long time. Okay, so that was the good old days. Now let's move a little bit forward, and I'll give you a sort of bird's-eye view of the different kinds of plasticity which we now know are actually present in the synapse, and not only in the synapse but in other places as well. Okay, so here again is the preparation that I described to you right at the beginning. This is your hippocampal slice, and what people do is record from these cells. They give input over here, they record over here, and so effectively this little triangle symbol is the synapse whose strength is being measured. And it's measured in various ways. One is to measure the slope of the signal that you pick up over here: initially the slope is small, and then after the synapse has learned, this peak becomes bigger, so the slope becomes larger, and you get a curve such as the one I already showed you. So there are different stages of synaptic plasticity: one is called short-term plasticity, there's early, there's late, there's some medium-term; I mean, there are many, many variants on this.
This is all just to orient you to some of the things I'll be telling you about over the next few slides. Now, one of the striking things about plasticity is that it is extremely sensitive to pattern, and when you think about it, it's got to be sensitive to pattern. If you just went ahead and remembered every single input that came into your brain, even with 10^15 synapses you would saturate out very, very soon. You have to be very selective about what you choose to remember. So it's okay, then, if you forget 98% of what I'm telling you now, because you have to be selective, to only remember the 2% that, I hope, is relevant to you, that's going to make some difference to your long-term survival. After all, this is all about survival. So there are different kinds of signals which neuroscientists have discovered are good for causing long-term changes. One of them, as we already discussed, is that you simply blast the synapse with strong input in a very, very short period of time. This is called a massed stimulus. This causes strengthening, but it does not cause protein synthesis. This is the kind of thing which is like cramming at the last minute before your examination. It doesn't work so well. What works is: you cram a bit, then you wait, then you cram a bit more, and so on. You revise and revise and revise until you're blue in the face, and then you remember it well. And this kind of spaced stimulus protocol actually causes a protein-synthesis-dependent form of plasticity. Still, these are horribly unnatural things to do. Your cells don't go blasting each other with 100 Hz input for a second, repeated every few minutes. That's very peculiar. This is a more realistic kind of stimulus: theta-burst. Theta is a natural rhythm in the brain, and there are stimulus patterns which are meant to be reminiscent of these natural patterns, and this turns out also to give very good, robust, long-term plasticity involving protein synthesis.
Now, it would also be a really bad thing for your brain if you could only strengthen synapses. What would happen if you just kept turning up the dial on the connections in your brain? Anybody? You'd just max out everything, right? You'd literally be epileptic the whole time. Your whole network would just go into a wild cycle of self-excitation, and that would be that. So you can't allow that to happen: what goes up must come down. There has to be a way, and actually there were quite a few years when it was not at all clear how synaptic strength came down. Now, of course, there are many ways known, but here's one of the first clear ones, which everybody now uses: if you just give a steady 1 Hz input for 15 minutes to a synapse, it will cause a reduction in synaptic strength. The minus means that the strength goes down, and this too, interestingly, involves protein synthesis; you can't do it if you block protein synthesis. Now, this next one is one of the more interesting forms, and certainly, theoretically speaking, a huge amount of interesting work has been done based on spike-timing-dependent plasticity, which basically asks the question: what was the order of activity at the synapse? This goes almost right back to the slide I showed you with the quote-unquote Pavlovian conditioning: which came first, the pre-synaptic activity or the post-synaptic activity? The idea here is that if the pre-synaptic leads the post-synaptic, in other words if there's a causal chain of events, so to speak, pre followed by post, then you get strengthening, and if it's the other way around, you get weakening of the synapse. You have to repeat this many times to really get a significant change, but this can go in both directions: you can have strengthening as well as weakening, depending on the sequencing.
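That timing rule is usually written with exponential windows. Here is a sketch (the amplitudes and the 20 ms time constant are typical textbook-style values, used purely for illustration): pre-before-post potentiates, post-before-pre depresses, the change shrinks as the spike-time gap grows, and repetition accumulates into a significant weight change.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pairing (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                         # causal: pre leads post -> strengthen
        return a_plus * math.exp(-dt / tau)
    else:                              # acausal: post leads pre -> weaken
        return -a_minus * math.exp(dt / tau)

w = 0.5
for _ in range(60):                    # repeated causal pairings, 10 ms apart
    w += stdp_dw(t_pre=0.0, t_post=10.0)
print(round(w, 3))                     # strengthened well above 0.5

print(stdp_dw(t_pre=10.0, t_post=0.0) < 0)  # reversed order weakens -> True
```

Note the built-in asymmetry (a_minus slightly larger than a_plus): a common modeling choice so that uncorrelated spiking weakens synapses on average, which is one answer to the runaway-strengthening problem above.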
This is actually pretty remarkable. In the previous class I tried to give you the idea that you can do a lot of clever things with chemical networks, and here is just one example of some rather complicated pattern discriminations and decisions that these chemical circuits have to be able to make. So let's just leave it at that and say that this is the job of the chemical circuits, and they do it somehow. Let me just remind you what goes into synaptic weight. This is a classical equation: the weight of a synapse is equal to n times p times q. n is the number of vesicles which release neurotransmitter; a lot of experiments have shown that basically every vesicle releases roughly the same amount of neurotransmitter each time it goes off, and that causes roughly the same amount of depolarization post-synaptically. So if you increase the number of vesicles being released, you increase the synaptic weight. p is the release probability, and that's not hard to understand either: if there is a greater likelihood that a given action potential will cause release of neurotransmitter, you will get a stronger synapse. And q is the effective size of one quantum, which is basically a matter of how many receptors you have present post-synaptically, though in principle it could also change by increasing the size of the quantum itself.
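The classical quantal expression W = n · p · q can be made concrete with a tiny worked example. The numbers here are made up purely for illustration:

```python
# Worked example of the classical quantal expression for synaptic weight,
# W = n * p * q.  All numbers are invented for illustration.

def synaptic_weight(n, p, q):
    """n: vesicles/release sites, p: release probability, q: quantal size."""
    return n * p * q

baseline   = synaptic_weight(n=8, p=0.25, q=0.5)   # 1.0
# Any of the three factors can express plasticity:
stronger_p = synaptic_weight(n=8, p=0.5,  q=0.5)   # pre-synaptic change -> 2.0
stronger_q = synaptic_weight(n=8, p=0.25, q=1.0)   # post-synaptic change -> 2.0
print(baseline, stronger_p, stronger_q)            # 1.0 2.0 2.0
```

Doubling p (a pre-synaptic change) and doubling q (a post-synaptic change) produce the same change in weight, which is one reason the pre-versus-post debate described next was so hard to settle from weight measurements alone.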
So these are the factors that go into synaptic weight. Now, one of the great things for students is when senior scientists disagree vociferously; it's great fun to watch. I had the pleasure of being at one of the Society for Neuroscience meetings when there was this vicious debate, and I have to say vicious it was; in retrospect it was actually unpleasant, but it was great fun to watch as a student, provided one wasn't in one of those labs. The debate was over whether synaptic plasticity is pre-synaptic or post-synaptic. So there was this marvelous session, I still remember it, and presumably there were many other sessions around the world, but at this Society for Neuroscience session the pre- and post-synaptic camps were at it, bashing each other and telling each other that your results are complete nonsense, and all of that. It was great fun, and of course even back then it was apparent to me, and I'm sure to anyone else who was reasonably detached from the proceedings, that both camps had to be right on this one; there was lots of good experimental data on both sides. They're both right, and yes, now everybody accepts that there is pre-synaptic change, there is also post-synaptic change, and really you have to think of the synapse as a very tightly coupled unit, et cetera, et cetera. But it was great fun while it lasted. So all of these things can and do change when you have synaptic plasticity; the take-home message is that it is both pre- and post-synaptic, with stochastic release, as Angela pointed out. Okay, so let's go through some of these forms of plasticity, and I'll try to go through them quickly because we are heading towards the noon hour, when you all go in search of lunch. So, short-term plasticity. There are various kinds of this; let me just run you through facilitation, depression, and potentiation. Good. Facilitation simply means that you get a strengthening of the synapse while the activity is coming along, so basically it's very, very tightly coupled to the
high-frequency stimulus, and once it goes away the synapse settles back down to baseline. If you have a slightly longer-term effect, short-term potentiation, then it takes a little while for the synapse to build up strength and a little while to come back down, but it does come back down to the original level. The time course may vary, but the end result is about the same. So this is not long-term potentiation; it does not stay up. It is remembering something for a short while, so to speak. But interestingly, you can also get a depression effect: you keep hammering at the synapse and it sort of gets bored after a while, so its synaptic strength weakens; then you leave it alone and it comes back to baseline. Many of these effects can be accounted for by looking at what happens on the pre-synaptic side with calcium buildup. When you first give an action potential, you get a certain amount of calcium influx through your voltage-gated calcium channels over here, and that causes vesicles to be released. If you keep bashing at it, two things can happen. One is that this calcium is retained on the pre-synaptic side and builds up, so the vesicles are more likely to be released, and you get potentiation. But you could also get some kind of inhibition of the receptors, in which case you would get depression. You could even start to run out of the vesicles that are available to release, so that too could give you depression. So you can get effects in either direction, and all of these things could be happening pre-synaptically; this is a way to get short-term plasticity. Then it's also fairly clear that different kinds of ion channels, P/Q-type channels here, are very important, and this has been shown by comparing what happens with wild type and knockout: you can see that here is your potentiation followed by slow depression, which has a very different profile from this much more rapidly depressing case where you knock out one of the channels; it was maybe the P-type
channel. So there's the wild type and there's the case with the knockout, and different ion channels, with different patterns of calcium influx, have different effects on short-term plasticity. And then of course there's the vesicle-release part: there is a pool of vesicles which is available, and if you just hammer away at the synapse you can run out of vesicles, and then you get depression. So all of these things can cause short-term changes in synaptic weight. This is on the time scale of a few seconds to maybe half a minute, so it could be quite relevant for active neural processing; there's a lot you can do with short-term plasticity. Okay. Now, long-term potentiation. Here we're talking about plasticity events which stay for basically as long as the system is healthy, and as I mentioned, this kind of plasticity has been measured for up to a year. So here is bi-directional plasticity. Here's a 3 Hz stimulus repeated for a long time; what is it causing, potentiation or depression? What does that look like to you? Go on, take a stab at it. Depression, that's absolutely right, this is causing depression: after this stimulus the response is lower. At 10 Hz, what's it doing? There's only a short-term effect, nothing for the long term; it's still at baseline. And at 50 Hz it's causing potentiation. In all cases the same number of pulses was given; it's just that the frequency was increased, so the total delivery time was shorter. This was an interesting study which was, in a sense, a biophysical implementation of something I zipped past; I'll come back to it: the BCM rule, the Bienenstock-Cooper-Munro rule. Anyway, different frequencies of input here are causing different kinds of plasticity, and these are all long term. Thank you. So here is your classical STDP plasticity, a different pattern of input which
gives either depression or potentiation; this is also a long-term form of plasticity. Let's see if it works this time. Nope; I'm afraid I'm going to have to ask you to help out. And here, as I said, are the different kinds of patterns that we discussed, and many of them in fact involve protein synthesis to work. So it has now become clear that protein synthesis in neurons is not something that you just sort of leave to the housekeeping guys in the soma to deal with; it's actually an integral part of synaptic change, and it's very important in determining the region in which change happens. It turns out that you have protein synthesis happening right there in the dendrites, underneath the synapses which are triggering the activity: if you have a strong plasticity stimulus over here, right underneath that synapse you will get local synthesis of proteins. It's very interesting, and there's room for a lot of fascinating regulation here, which is something we're working on actively. The mRNA still has to be produced in the nucleus, because there's only one source of DNA in the cell, and that's the nucleus; so the mRNA has to be made there, depending on what signals are coming in through the whole cell, but the actual synthesis then uses that mRNA and happens locally in the dendrites. So there's a lot of interesting stuff happening with transport and selectivity in this way. Thank you. So this leads to an idea; this is somewhat of a complicated slide, so why don't we just go back; I think the previous slide is better for describing synaptic tagging. Supposing you had a strong burst of activity over here on this synapse. It turns out that a neighboring synapse can now receive a weak input and sort of piggyback on the protein synthesis that's happening here, stealing the protein that is produced through the hard efforts of this synapse, and also get potentiated. This is the process underlying something called synaptic tagging, which basically says that once you've
got a strong stimulus in one very localized region of the dendrite, then neighboring spines are also able to undergo plasticity much more easily, and it turns out this is true both for LTP, that is potentiation, and also for depression. So synaptic tagging starts to move away from the original idea of extreme specificity, that you have activity on this synapse and that is the only synapse which is going to be affected. It starts to bring in the idea that plasticity is no longer purely homosynaptic, no longer dependent on just that one synapse's activity; it also depends on what's happening in the vicinity. So: heterosynaptic plasticity. Okay, this leads to something that we're very interested in. Great, now you have all the machinery right there for building the synapse, and like all things in the cell, these are moderately nasty, complicated chemical networks. So what kinds of computation can you do with this? This is something which Pragati has been working on, and we've been trying to build it up. This is the block-diagram version of it: your activity comes in, you get your protein synthesis, it activates CaMKIII, not CaMKII, it activates lots of nice molecules, and one of the interesting possibilities is that because you're producing proteins which are themselves part of the synapse, you have interesting feedback loops here. Leaving that aside, this is some work that we're very interested in: how does the synapse maintain itself? How many of you have heard of self-modifying computer programs, where you write a program which changes the program itself, not just the data but the program itself? What we have here is a machine which not only modifies the program, it modifies the machine itself. It's as if the machine were swapping different co-processors in and out as the computation is going on: the synapse is
rebuilding itself and its neighborhood through all of these synthesis events. That's why I find this really interesting conceptually. Question? What I mean by that is that your synapse is a molecular machine: it's a machine which is taking signals, doing some computation, and generating outputs. But because the synapse is controlling which new molecules get put into the synapse, the activity that comes into the synapse is redesigning the synapse itself. So it's like rebuilding the machine while the machine is running. Just as a point of view, I find that absolutely, marvelously bizarre; self-modifying programs are bad enough, but here you have a self-modifying machine, and conceptually I just like the idea that you can actually build something which works well while it's modifying itself as it goes along. So this is one reason why I'm interested in this. Okay, just so that you have an idea, these are some results from our modeling studies on this; you have different time courses for different combinations of stimuli and so on. I'll skip over this. Yes, feedback. So there is now evidence that you can have switches not just at the level of the synapse but at the level of the cell. For example, here are different kinds of reporters of cell-wide activity: these are different kinds of stains in different regions of the hippocampus, the dentate gyrus, CA3, CA1. These are cell-body stains, and these are stains of, I think, a c-Fos-driven promoter, where you end up with entire cells which seem to be involved; and if you block protein synthesis, then you don't get this kind of turn-on. The reason why this is clearly involved in memory is that there are some very cool recent experiments where people have used these promoters to express proteins which can be used to turn off those cells, or even kill those cells. So what happens if you remember something and now you kill all the cells, let's say 20% of your hippocampus, which were part
of that memory? What happens to the memory? It's gone. That's a very strong prediction, and that's what actually happens. You can train up the animal so that it remembers something, in this case some kind of fear-conditioning stimulus; you knock out those cells and the animal forgets it. And you can go the other way around: you can implant a fake memory, and you can do even more complicated things with context and so on. All of these things have been done recently, and they reiterate the point that you do have these cell-wide effects in memory as well. Next slide, please. So these are the different kinds of long-term plasticity that we've discussed; now let's move on to metaplasticity, and I'm just going to use a couple of slides here. This is the BCM curve, the Bienenstock-Cooper-Munro curve, and it is very interesting because of what it says. If you remember that experiment of Dudek and Bear which I showed you: at a very low frequency of stimulus you don't get any synaptic change, at a medium frequency you get depression, at a crossover point in between you again get nothing, and at high frequency you get potentiation. You can replace frequency with, say, calcium level or something else, but the key question is: what happens if you shift this curve following plasticity? Go to the next slide, please. If you shift the curve, supposing this synapse has already learned something and now the curve moves over a little bit, what happens is that it becomes harder to teach that synapse something new. A frequency of stimulus that earlier was causing it to potentiate will now cause it to depress. So it's not just that you can have a stimulus which causes the cell to learn or forget something; the meaning of that stimulus itself can shift. That is metaplasticity: not just plasticity, but a change in the rules that decide whether a given stimulus should cause potentiation or
depression. So this is another important form of plasticity. Let's speed on through this. Structural plasticity is when you actually have gain and loss of synapses, or morphological changes; this is now widely accepted as very much part of the plasticity process. So again, lots of signaling happens and causes events which have structural implications, and these are things that you can image. I'll just focus your attention on these two panels: here is before and after a plasticity-inducing stimulus, and I think you can see that this was the shape of the dendrite, with lots of little spines, and there are additional protrusions that come up after plasticity. So there are structural changes associated with plasticity, and these are almost certainly due to those chemical changes with structural consequences that we discussed right at the beginning. Okay, so there are different kinds of structural change which are expected to be part of plasticity. One of them is that if you have some cooperative effects in how the receptors come into the synapse, that can itself cause forms of plasticity; this is something that Harel Shouval has worked on. Another kind of plasticity, which I ran into actually by accident, came from just looking at receptor trafficking: it turns out that this is also a bistable process and can cause long-term potentiation. These are some simulations that we ran. This actually addresses one of Astrid's questions, which was what happens in the very, very long term: it turns out that this particular kind of switch has a very, very long lifetime. It ran for as long as I was able to run the simulation, which was for a year; it took about a week on my cluster, but that was a year of simulated time. And if you go to the next slide you'll see something which... let's just skip over these; skip, skip, skip. Okay, so we've gone a little bit ahead of ourselves. I have my eye on the clock; unfortunately I'm facing the clock, so I can see what's happening. If you can go back
a couple of slides. ("You can go on a few minutes longer.") The slides are up here; I'll try not to rush it, then. So John Lisman, actually John Lisman, Paul Miller, Xiao-Jing Wang, and other people had done this calculation where they showed that CaMKII could also store information for a very, very long time; they actually ran it for a century, so good for them. So you can, in principle, make switches out of molecular elements which will store information reliably for a very long time. Another kind of plasticity, which again relates to this self-modifying-machine idea that I'm particularly interested in myself, is plasticity which involves movement of molecules to new places, in other words structural changes of some kind. Here's one which involves Stargazin, an important structural molecule whose job, among other things, is to help move AMPA receptors, the receptors for your major excitatory neurotransmitter, to and from the synapse. So this is a little signaling network which underlies this plasticity, and it turns out that this too is stable under stochastic conditions for a reasonable length of time. You can get a turn-on with a brief strong stimulus, and you can get a turn-off using a longer weak stimulus; this is a stochastic calculation, which is why it looks so fuzzy. And this is based on a general, quote-unquote, theory which I've come up with, which is basically an abstraction based on three principles. It says that if you have trafficking between two compartments, let's say the post-synaptic density and the inside of a spine, and you make the assumption that the trafficking depends only on a couple of molecular states and on the internal signaling, you can actually come up with some equations. Next slide, please. Equations which look like that, and which I won't beat you over the head with, but the upshot of these equations is kind of cool. Next slide, please. The upshot is that if you combine signaling with this
trafficking, if you combine the signaling processes with the movement of molecules back and forth, you can get different kinds of molecular identities. In other words, you can get switching between different kinds of organelle types, you can get maturation and state switching, you can get receptor insertion, which is important for plasticity, and you can even get weird things like oscillations. I'll be telling you more about this this afternoon when we do the simulations; you'll actually see some of these models. Okay, homeostasis. This I'm going to really fast-forward through and leave to Astrid to deal with, so I will skip it, except to say that plasticity is not just a matter of changing synaptic weights: you also have to balance the excitability of the whole cell, and the cell somehow has to have some idea of the correct level of excitability that it should be aspiring to. All of this is again done through this marvelous juggling of the ion channels, their kinetics, and the signaling pathways that tweak these values. So I'll skip over this very rapidly; skip, skip, skip. And dendritic excitability and plasticity, I think we can mostly skip through that too, except to say that you do not necessarily have to have plasticity at the synapse: you can have it over fairly large regions of the dendrite, and these things are actually known to happen. So let's buzz through all of these; I've done lots of simulations on this and I will skip all of those in the interest of time. Okay, so let me just leave you with a visit to the zoo. Here, just to give you a perspective on it, are some of the signaling circuits that have been proposed that can give you bistability. This is something that has taken place over years, and I found it a really surprising story as it unfolded, because to me, when I started out looking at synaptic plasticity and
thinking of bistables as a way to achieve it, it seemed that this was a really exotic, unusual, peculiar kind of chemical thing, almost miraculous in its properties, and CaMKII was sort of the prototypical example of it. Then we came up with the possibility of a MAP kinase feedback loop being one of those; there have been various other suggestions involving protein kinases; then the protein-synthesis switch came up; AMPA trafficking was one that I ran into purely by accident; and there's receptor clustering. Next one, please. Yes, so here's the CaMKII one; this axis is in years, by the way. You give a stimulus here and the switch turns on, and it stays on for years. So this is just one of the creatures in the zoo, the CaMKII switch. If you can go on to the next one, this is the MAP kinase switch; it looks like that, it's just a feedback cycle. Next one. This one is actually not so nice when it's stochastic: this is the deterministic curve, but the blue and the cyan curves are stochastic ones, and you can see that they spontaneously turn on and turn off. So this is actually not a good candidate for synaptic plasticity; it might work on a larger volume scale, like the dendrite, but not the synapse. This is protein-synthesis feedback, yet another member of the zoo. Next one: receptor clustering, you've already seen this; these are the trafficking bistables, you've seen some of this; you've seen that, seen that. Okay, this is sort of the family tree, so to speak, of bistables. Because I was so intrigued by the fact that bistables seemed to be coming up in unexpected places, I worked with a friend, Naren Ramakrishnan, and we did an exhaustive search of all possible chemical permutations. You take simple reactions and make a sort of alphabet of them, and then you just build up all possible combinations, making larger and larger chemical systems. We did that, and we found that there's one bistable possible in this very, very small
system with three molecules and two reactions; there's just one bistable, and at these small sizes you can actually search exhaustively. If you take slightly larger systems you get more bistables, and interestingly they're related to this one. You go to larger systems still, and the number of bistables increases; what at the earlier level looked like individual new roots of the family tree turned out to be connected up in the higher branches of the tree. As you go to more and more complicated systems, the number of bistables just starts to shoot up. Now, we looked and we didn't find multistables in this search, but having since gone and done the more mathematical analysis, which I sort of skimmed through on that slide, I found quite straightforward ways of making multistable systems. Really the easiest way of making a multistable system is just to take two bistables and couple them loosely to each other; that can give you multistability, so it's not hard to do. But oddly enough, this search didn't turn it up. Maybe we weren't looking closely enough, or maybe the parameter sampling was not sensitive enough: we sampled, I think, about a thousand parameter combinations for each putative bistable, and it's quite possible that there were some very narrow regions which we didn't sample. So the point is that they're all related, and the other point, which really surprised me, is that about 10% of everything we tried was bistable. Far from being this really bizarre, exotic, unexpected property of chemical systems, it turns out that bistability is actually rather easy to come by, and I think, anyway, that was a very interesting and satisfying result. Okay, so that was the trip to the bistable zoo; you've gotten to see some of these creatures. So let me just wrap up and recapitulate. (Whoops; hey, now it's working, there we are, on the right page.) I started out with a little bit of background on why synaptic plasticity is
important, and a sort of toy model of how this might give rise to what we would actually call some form of memory, the Pavlovian conditioning. I gave you an idea of why all of this seemed to be falling very neatly into place some 20 years ago, when it looked like we had all the elements for quote-unquote memory based on these molecular events. Then we started to learn that there are actually many, many kinds of plasticity, and many examples of each kind, and the whole picture started to get more complicated and more muddy and, I would have to say, more biological, because in biology you have complexity wherever you look, and this was no exception. And just to finish, I gave you a glimpse of how many kinds of bistable systems are likely to be lurking out there. So, to wrap up: plasticity is not just a synapse property, though I've been emphasizing the synapse; it covers dendrites, it covers cells, it covers networks as well. And of the different kinds of plasticity, there are many forms, there are many mechanisms, and it affects everything. So what you're hearing now, and hopefully processing into long-term memory, will go through many, many events, different kinds of circuits, different kinds of chemical networks, before it's finally stored. I think this is an absolutely essential part not just of how the brain stores information but of how it works in general: you can't separate plasticity from neural processing; it is, I think, one of the core elements of neural processing. Okay, so with that thought, let's wrap up. I think that's the last slide. Thank you. Questions? There's a question: presumably bistability by itself is not sufficient for memory, because you could have a switch that just flips all the time; so what additional properties might you need from the circuit? Do you follow the question?
Well, I'm not sure where you're going with it, but you need to be very selective about what you allow to trigger the switch. Then there's a whole set of network-level questions which help you decide: okay, I've used up this synapse for storing this piece of information; what happens if you want to store some other piece of information that might involve those cells? This is something which I think Abbott and Fusi did a study on not too long ago, so that's another level of question. I mean, bistability is really homing in on the molecular level; there's a lot more going on. "Can you characterize..." Ah, okay. I think we probably have somewhere in the region of 10 sorts of things that happen, would be my guess. It used to be thought that, okay, there is one chemical switch which would do the trick, but as I said when I went through the zoo, I'm pretty sure that there will be a lot more coming. My feeling is that there's probably a reasonable but small number of mechanisms which are actually used in synaptic plasticity; of the order of 10 would be my guess. One argument that's been put forth concerns long-term stability despite stochasticity: slow switches work very well for that, but it so happens that the very fast switches, which can respond to very brief kinds of synaptic events, are also the switches which store information only for a short time; you can think of them as very, very sensitive things. But if you couple a fast switch to a slower switch, then the fast switch can hold the information just long enough to cause the slower, longer-term switch to kick in, and so you can have a cascade of switches which would be able to store information for a really long time and yet respond very, very quickly. So these are some of the ideas that I and other people are kicking around.
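The fast-switch-feeding-a-slow-switch idea can be sketched in a toy simulation. Everything here, the class names, thresholds, hold times, and time constants, is invented purely to illustrate the cascade principle; it is not a model of any specific molecular switch:

```python
# Toy sketch of a fast switch -> slow switch cascade.
# All parameters are illustrative assumptions, not measured values.

DT = 0.001                 # time step, seconds

class FastSwitch:
    """Responds to a brief input, but holds its 'on' state only briefly."""
    def __init__(self, threshold=1.0, hold_time=0.5):
        self.threshold, self.hold_time = threshold, hold_time
        self.on_timer = 0.0
    def step(self, drive):
        if drive >= self.threshold:
            self.on_timer = self.hold_time     # (re)trigger the fast switch
        self.on_timer = max(0.0, self.on_timer - DT)
        return 1.0 if self.on_timer > 0 else 0.0

class SlowSwitch:
    """Slowly integrates the fast switch's output; latches on permanently."""
    def __init__(self, threshold=0.3, tau=1.0):
        self.threshold, self.tau = threshold, tau
        self.x, self.latched = 0.0, False
    def step(self, drive):
        self.x += DT * (drive - self.x) / self.tau   # leaky integration
        if self.x >= self.threshold:
            self.latched = True                      # bistable commitment
        return self.latched

fast, slow = FastSwitch(), SlowSwitch()
latched = False
for i in range(int(2.0 / DT)):                 # simulate 2 s
    stimulus = 2.0 if i < 10 else 0.0          # a brief 10 ms pulse only
    latched = slow.step(fast.step(stimulus))
print(latched)   # True: the fast switch holds the brief pulse long
                 # enough to commit the slow, long-term switch
```

Driving the slow switch directly with the same 10 ms pulse would leave its integrator far below threshold; the fast stage is what stretches a brief synaptic event into something the slow, stable stage can capture.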