I will talk about the effect of dynamic synapses on neural networks, and I should start with one disclaimer: I am a theorist, so I take experimental data, build a phenomenological model from it, and then see what the consequences of that model are at the network level. There may be many people here in the audience who know much more about the details at the synaptic level, which is not really my expertise. So what I will give you is a brief recap of what I understand of dynamic synapses, and the overall story is what the effect of dynamic synapses, in particular dynamic synaptic depression, is on computation in recurrent neural networks.

To recap dynamic synapses for this audience, here is an old picture. This is a pyramidal cell, and if you stimulate it presynaptically with a pulse, then postsynaptically, in different trials, you get these kinds of EPSPs. Now if you do a pairing stimulation, mimicking a learning process, and then repeat the stimulation after the pairing, you see that in particular the first response is reproducibly and significantly enhanced, while the rest is rather murky. That first response is shown here: before the pairing you have this response, then you do the pairing, and after the pairing you get this much higher response. So before pairing the average response of the synapse is about this, and after pairing you get a big first peak which then decays, and that is shown here statistically as a function of the response number. This effect is synaptic depression, and the depression depends on the stimulation frequency. On the left you see the responses before pairing and on the right after pairing: after the pairing the first response is enhanced, more or less independently of the frequency, but the asymptotic response depends very much on the frequency of stimulation. If the frequency is high, the response decays to a low asymptotic value; if the frequency is low, it decays to a higher value. That is what is shown here: the asymptotic value decreases with the stimulation frequency. So that is a very brief recap of synaptic depression.

The Markram-Tsodyks model of it involves three presynaptic variables x, y and z, which add up to one and which are the fractions of neurotransmitter that are recovered, effective and inactive. How should you read that? If nothing has happened in the past, you can consider all the neurotransmitter to be recovered. If a spike comes in, part of it becomes effective neurotransmitter, which produces a postsynaptic effect in the postsynaptic neuron: the total current is proportional to y. The effective fraction then leaks away with a time constant which is basically the time constant of the EPSP itself, so it decays in a few milliseconds; it then becomes inactive, and it recovers with a time constant tau_rec, which can be several hundreds of milliseconds. This is the basic model of synaptic depression, in which u is the fraction of the recovered neurotransmitter that is released by each spike.
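As a concrete illustration of these dynamics, here is a minimal simulation sketch with illustrative parameter values (tau_in of a few milliseconds, tau_rec of several hundred milliseconds); it is not the speaker's actual code, just the model as described above:

```python
import numpy as np

# Minimal sketch of the Markram-Tsodyks depression model described above.
# x, y, z are the recovered, effective and inactive fractions of neurotransmitter
# (they sum to one); u is the fraction of recovered resources released per spike.
# Parameter values are illustrative.

def simulate_tm(spike_times, u=0.5, tau_in=0.003, tau_rec=0.5, dt=1e-4, T=2.0):
    steps = int(T / dt)
    spike_steps = set(int(round(t / dt)) for t in spike_times)
    x, y, z = 1.0, 0.0, 0.0                      # start fully recovered
    trace = np.zeros((steps, 3))
    for n in range(steps):
        # effective -> inactive with tau_in, inactive -> recovered with tau_rec
        dy = -y / tau_in
        dz = y / tau_in - z / tau_rec
        dx = z / tau_rec
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if n in spike_steps:                     # a presynaptic spike releases u * x
            release = u * x
            x -= release
            y += release
        trace[n] = (x, y, z)
    return trace

# The postsynaptic current is proportional to y, e.g. I(t) = A * trace[:, 1];
# at high stimulation frequencies the successive releases (and hence the EPSPs)
# settle to a lower asymptotic value, as in the experiment.
```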
One can augment this model with facilitation by adding another dynamical equation for u: if presynaptic spikes arrive shortly after each other, u is increased towards an asymptotic value, and if the presynaptic activity ceases, it falls back to its baseline value with a time constant tau_fac, for facilitation, which we will also see later in this talk.

Now, how to work with this model? You can make the equations a bit simpler by ignoring the fast time constant: it is just a few milliseconds, so let us ignore it; the important thing is the large time constant. Then you collapse two of the variables, and since the fractions sum to one you are left with a single variable, which we call x. So you get a model of a dynamic synapse as a variable between zero and one, which gets depressed by a certain fraction whenever an incoming spike arrives and which, if nothing happens, recovers to its full value of one with the time constant tau_rec. Here you see a simulation of that: the green marks are incoming spikes, the synaptic variable starts at one, decreases with each spike and recovers in between, and that is the characteristic behavior of this very simple model.
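In the same sketch form, the reduced model with a single depression variable x (and, optionally, a facilitation variable u) might look as follows; again the parameters are illustrative:

```python
import numpy as np

# Sketch of the reduced model: one depression variable x per synapse (or per
# presynaptic neuron), recovering towards 1 with tau_rec and depressed by each
# presynaptic spike. The optional facilitation variable u is increased by each
# spike and relaxes back to its baseline U with tau_fac. Parameters are illustrative.

def simulate_reduced(spike_times, U=0.5, tau_rec=0.5, tau_fac=None, dt=1e-3, T=3.0):
    steps = int(T / dt)
    spike_steps = set(int(round(t / dt)) for t in spike_times)
    x, u = 1.0, U
    x_trace = np.zeros(steps)
    for n in range(steps):
        x += dt * (1.0 - x) / tau_rec            # recovery towards the full value 1
        if tau_fac is not None:
            u += dt * (U - u) / tau_fac          # facilitation relaxes back to baseline
        if n in spike_steps:
            if tau_fac is not None:
                u += U * (1.0 - u)               # each spike transiently raises the release fraction
            x -= u * x                           # and depresses the available resources
        x_trace[n] = x
    return x_trace

# Example: regular 20 Hz input for about one second
x = simulate_reduced(spike_times=np.arange(0.1, 1.1, 0.05))
```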
We want to study the effect of this in attractor neural networks, so let me remind you with a few slides what an attractor neural network is. It is the Hopfield network, from the eighties: a collection of neurons which can be stochastic binary neurons, or continuous graded-response neurons, or you can put more realistic neurons in there, but you always get the same kind of behavior, namely an attractor neural network that can do associative memory. What does that mean? It means you can store a number of patterns in such a network, shown here as the black nodes, and you can think of such a network as embedded in a larger structure that receives some sensory data or something like that. You encode certain numbers in the connection strengths between the neurons in such a way that the dynamics of the network can hold certain patterns as stable memories. Here is a picture of that: three patterns, a spider, two bottles of beer and a dog, and a neural network arranged as a two-dimensional array of neurons where the gray level codes the activity of the neurons. If you want to store these three patterns simultaneously, you adjust the connections in such a way that if you now initialize the network with something that looks a little bit like one of the stimuli, the dynamics takes it there: present a noisy spider and you get a cleaned-up spider, present one beer bottle and you get two beer bottles, present part of the dog and you recover the whole dog. The mental picture is a landscape in phase space where the activity is attracted to certain fixed points, and these fixed points are these pictures.

Now, this is a very nice idea for a memory in the brain, but one of the problems with it is that once you are in a fixed point, there is no mechanism to get out of it: you are stuck there, so you need something else. What we will see is that dynamic synapses have a profound effect on this kind of behavior, because you can imagine that in such a fixed point one of these black neurons is firing all the time, and that firing does something to the strength of its outgoing synapses; as a result these attractors become metastable and start to move over time.

Here are the formulas. You can model this with binary neurons that take the values zero and one, with a stochastic dynamics: the probability that neuron i at time t plus dt takes the value one (it spikes), given that at time t the activity of the whole network is a vector s, is a sigmoid of a number beta times a local field, where the local field is the summed input from all the other neurons through the connection strengths, compared with a threshold. You store the patterns using the covariance rule, where the patterns are binary bit patterns like the dog and the spider, written xi^mu, with mu labeling the pattern and i and j labeling the neurons. With this learning rule you get weights that do exactly what I just showed you in simulation.

An important number is alpha, the number of patterns divided by the number of neurons, and we will see that the memory depends on alpha not being too big. But first let us look at the other parameter, beta. Beta is the strength of the coupling: if beta goes to infinity the sigmoid becomes a step function and you get noiseless dynamics, with the probability saturated to zero or one; if beta is small, say beta equal to zero, you get the sigmoid of zero, which is one half, and each neuron just fires randomly with probability one half. So beta is a coupling strength, and you can think of this system as a spin system: if the spins are uncoupled, each spin does its own thing, but if the coupling gets stronger they start to listen to each other, and if the coupling is positive they like to do the same thing, so you get something like this. Depending on beta, this is already a very simple memory: everything black or everything white. You can make other versions of it where, by changing the couplings, you get a spider pattern or two beer bottles with the same ease. So it depends on the strength beta whether you go from here, where you have no memory, to here, where you have memory, and there is a phase transition between these two states. It is found by looking at the fixed point of the dynamics, and it has to do with the slope of the sigmoid compared to the straight line: if the straight line crosses the sigmoid three times you get these memory patterns, which happens when the sigmoid is sufficiently steep, that is, when beta is sufficiently large. If beta is very low, the line crosses the sigmoid only at one point in the middle, and you have no memory; that is the left-hand side.
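A compact sketch of this stochastic binary network with the covariance rule; the threshold and normalization are my own illustrative choices, not taken from the talk's slides:

```python
import numpy as np

# Stochastic binary Hopfield network with the covariance rule (sketch).
# Neurons take values 0/1 with mean activity a; beta acts as an inverse temperature.

rng = np.random.default_rng(0)
N, P, a, beta = 100, 5, 0.5, 20.0

patterns = (rng.random((P, N)) < a).astype(float)            # xi^mu_i in {0, 1}

# covariance rule: w_ij proportional to sum_mu (xi^mu_i - a)(xi^mu_j - a)
W = (patterns - a).T @ (patterns - a) / (a * (1 - a) * N)
np.fill_diagonal(W, 0.0)

def step(s, theta=0.0):
    """One parallel update: P(s_i = 1) = sigmoid(beta * (h_i - theta))."""
    h = W @ s
    p = 1.0 / (1.0 + np.exp(-beta * (h - theta)))
    return (rng.random(N) < p).astype(float)

# retrieval from a corrupted cue of pattern 0
s = patterns[0].copy()
flip = rng.random(N) < 0.2                                   # flip 20% of the bits
s[flip] = 1.0 - s[flip]
for _ in range(50):
    s = step(s)
overlap = (s - a) @ (patterns[0] - a) / (a * (1 - a) * N)    # close to 1 for good recall
```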
What you see then is that the memory, and here you see the overlap of a stored pattern with the actual network state, the magnetization say, is large for very large beta, and it goes to zero at this point, so there is a phase transition as a function of beta. The other phase transition occurs as a function of alpha, the number of patterns: if you store many patterns in this network, where "many" is measured relative to the size of the network, the network breaks down because of a sort of negative interference between the patterns. The total picture is given in this slide, and given the time, let me hurry up. T is one over beta, so large T means small coupling: if the coupling is too small we are up here, in the no-memory state, where none of the memories we saw before exist. If we increase beta, so go to smaller T, then in this gray area we have memories; that is the line we explored in the previous slide. If you now look at alpha, then because of the interference effect the network also breaks down when alpha gets too big. In other words, in this lower triangle you get memory storage; everywhere else you do not.

The question is what happens to this picture when we add the dynamic aspect to the synapses. The full model is actually very simple, because we have the same dynamical system; the dynamic synapses simply enter the local field by multiplying the synaptic contribution. It is good to keep in mind that it is not an x_ij: we do not have a number per synapse, we have a number per presynaptic neuron, because the presynaptic neuron activates all its outgoing synapses in the same way. So we get this number x_j, which lies between zero and one, and its dynamics is given by the simplified Tsodyks-Markram model with tau_rec in it.

How does this model behave? You observe the following result. If you store one pattern in the network — this is a network of one hundred neurons, and we store a pattern that is fifty percent white and fifty percent black; and I should tell you that if you store a pattern of white and black, then the anti-pattern, its reverse, is also a stable attractor in the simplest setting — then instead of that pattern being a stable attractor, the network starts to oscillate between the pattern and its anti-pattern. First we have the pattern, with some neurons in the state minus one and some in the state plus one, or zero and one, whatever you want to call it; then after some time it suddenly switches, and then it goes back, and so on. If you change the parameters a bit you get the same oscillations, but at a faster time scale. What is plotted here is the activity of the neurons of the network themselves, and here you see the activity of the x variables. There are x's belonging to the neurons that are active in the pattern and x's belonging to the neurons that are not: fifty of the hundred neurons encode the ones in the pattern, and fifty encode the zeros.
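Putting the pieces together, a rough sketch of this network with a per-presynaptic-neuron depression variable is given below. The parameter values are my own illustrative choices, and where exactly the simulation lands (stable memory, switching, or no memory) depends on beta, U and tau_rec, as discussed next:

```python
import numpy as np

# Sketch: binary stochastic network where each presynaptic neuron j carries one
# depression variable x_j that multiplies its outgoing contribution to the local
# field, h_i = sum_j w_ij * x_j * s_j. One stored pattern; for suitable parameters
# the network alternates between the pattern and its anti-pattern.

rng = np.random.default_rng(1)
N, a, beta, U, tau_rec, dt = 100, 0.5, 50.0, 0.1, 0.2, 1e-3

xi = (rng.random(N) < a).astype(float)                   # the stored pattern (50% active)
W = np.outer(xi - a, xi - a) / (a * (1 - a) * N)
np.fill_diagonal(W, 0.0)

s = xi.copy()                                            # start in the pattern
x = np.ones(N)                                           # synaptic resources fully recovered
overlaps = []
for t in range(20000):
    h = W @ (x * s)                                      # depressed local field
    p = 1.0 / (1.0 + np.exp(-beta * h))
    s = (rng.random(N) < p).astype(float)
    x += dt * (1.0 - x) / tau_rec - U * x * s            # active neurons deplete, silent ones recover
    overlaps.append((s - a) @ (xi - a) / (a * (1 - a) * N))

# Plotting `overlaps` shows the pattern (+1) / anti-pattern (-1) switching described
# in the talk, for parameter settings inside the oscillatory regime.
```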
We distinguish between the synaptic variables x of the neurons that are active in the pattern and of the neurons that are not, and you see that they play complementary roles. When you are in the pattern, the outgoing synapses of the active neurons are depressed, while the synapses of the neurons that are not in the pattern — those neurons are not active — start to recover in the meantime. If they recover enough, then suddenly the anti-pattern becomes active, the network switches to that state, and then the synapses outgoing from the neurons that were active in the pattern start to recover, and so on. That is the switching behavior.

This was with just one pattern. If you now store ten patterns — again one hundred neurons, where the first pattern is the first ten neurons on and the remaining ninety off, the second pattern is the second ten neurons on and the rest off, and so on — you get this very complex picture of switching dynamics. We would like to understand how this works and what it depends on. For that we introduce a number of variables: we look at the activity around one pattern, so we store many patterns but analyze the activity around one of them. We take the mean activity of the neurons that are active in the pattern and the mean activity of the neurons outside the pattern, and for x we do the same, so we get four variables and four coupled equations, which are our mean-field equations, and we can analyze those equations analytically and see what happens. Here you see another simulation of the same thing, with these variables m and x: m encodes the overlap with the first pattern, so whenever there is a peak here it corresponds to the first pattern being active, a peak in that block there, and the x's are the recovery variables corresponding to those ten neurons, which you can see recovering.

If there were time — and there is not, so I will not go into the details — you could do a stability analysis of this system, and you find that the two phases we had before, the memory phase where the pattern is stable and the phase with no memory where the random activity at a firing rate of fifty percent is stable, become separated, and there is a regime in between where neither of these behaviors is stable. That is shown in this picture: here the memory is stable as a function of the recovery time, here the non-memory state is stable, and in between neither of them is stable and you get these oscillations. In pictures, the take-home message is basically this slide: if the recovery time tau_rec is zero, we get the static-synapse case, the classic Hopfield network, and we have seen that if beta is small we get no memory and if beta is large we get memory; that is the horizontal line around zero. The new effect appears as a function of the recovery time: for a certain region we get this new switching behavior, which we can characterize completely. This is all for the case with only depressing synapses.
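To give a feeling for what such a mean-field reduction looks like, here is one illustrative way to write it for a single fifty-percent-active pattern, with m_in and m_out the mean activities of the neurons coding one and zero in the pattern and x_in, x_out the corresponding mean synaptic resources. This is my own simplified version for illustration; the actual equations analyzed in the paper may differ in their details:

```python
import numpy as np

# Illustrative four-variable mean-field sketch for one 50%-active pattern.
# Depending on beta, U and tau_rec, the trajectory settles into the memory state,
# the no-memory state, or the oscillatory regime described above.

def mean_field(beta=20.0, U=0.1, tau_rec=0.2, dt=1e-3, steps=20000):
    m_in, m_out = 1.0, 0.0          # mean activity of in-pattern / out-of-pattern neurons
    x_in, x_out = 1.0, 1.0          # mean synaptic resources of the two groups
    trace = np.zeros((steps, 4))
    for t in range(steps):
        field = 0.5 * (x_in * m_in - x_out * m_out)      # overlap-weighted local field
        m_in = 1.0 / (1.0 + np.exp(-beta * field))
        m_out = 1.0 / (1.0 + np.exp(beta * field))
        x_in += dt * (1.0 - x_in) / tau_rec - U * x_in * m_in
        x_out += dt * (1.0 - x_out) / tau_rec - U * x_out * m_out
        trace[t] = (m_in, m_out, x_in, x_out)
    return trace
```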
If you now add facilitation, then maybe this picture is the clearest: the facilitation time constant is on this axis and the recovery time on that one, and you see that basically the facilitation does not do much, because these lines are essentially horizontal. It does do something, and there is a whole complex story there, but for this talk I will ignore it. To understand the picture, look at the memory phase: in this panel, for instance, we vary tau_rec for a given value of tau_fac, here ten, so we take ten and move in this direction, and we see that the stable patterns, the memories, become unstable at around this point, then they oscillate, and over here is the no-memory state. So this line is where the memories lose their stability and the oscillations set in, and this other one is a very complex first-order phase transition.

What happens to the storage capacity, as the last point? First you may ask: this switching behavior, what is it good for? Well, you have a memory that is stable for some time, and it turns out that the time a memory stays stable is set by the recovery time constant. I do not have that figure here, but if you come from up here and go down at a certain beta, you first get no memory, then you start getting oscillations, and these oscillations get longer and longer and longer until they end in the stable fixed point; so the recovery time controls how long these plateaus last. You therefore have a whole range of parameters where you have these dynamic memories, and you can imagine that such a system, not acting in isolation as it does here but in a real environment where sensory stimuli are coming in, offers an interesting way to switch attention from one scene to the next. By the way, this picture shows a simulation with integrate-and-fire neurons, in which we show that the same kind of behavior that we analyzed for the binary network is reproduced with more realistic integrate-and-fire neurons.

So we have this system with dynamic synapses; what is the effect on the storage capacity? The bad news is that it basically reduces the storage capacity. From the eighties we already know that the storage capacity of the Hopfield network is low: it is a very complex statistical-physics calculation, and one can compute that the number of patterns you can store relative to N is about 0.14, so in a network of one hundred neurons you can store about fourteen patterns, which is not particularly many. This number actually goes down when you have depressing synapses: at recovery time zero you get this value, and with depressing synapses the storage capacity goes down. But if you include facilitation, then in the presence of facilitation and depression you can get this number back up again, so there is a way to compensate. There is quite a complex analysis in terms of the three parameters — the recovery time, the facilitation time and the fraction of neurotransmitter released per spike — and basically the storage capacity either stays the same or goes down.
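For readers who want to see what a capacity measurement looks like in practice, here is a rough empirical check (my own sketch, using the standard plus/minus-one convention rather than the zero/one neurons of the talk; the 0.14 figure itself comes from the statistical-physics calculation, not from a small simulation like this):

```python
import numpy as np

# Rough empirical capacity check for a static Hopfield network (sketch).
# Store P = alpha*N random +/-1 patterns, cue each one, and count how many are
# retrieved with high overlap after deterministic updates.

def retrieval_fraction(N=200, alpha=0.1, sweeps=20, seed=0):
    rng = np.random.default_rng(seed)
    P = max(1, int(alpha * N))
    xi = rng.choice([-1.0, 1.0], size=(P, N))
    W = xi.T @ xi / N
    np.fill_diagonal(W, 0.0)
    retrieved = 0
    for mu in range(P):
        s = xi[mu].copy()
        for _ in range(sweeps):
            s = np.sign(W @ s + 1e-12)           # noiseless (beta -> infinity) updates
        if (s @ xi[mu]) / N > 0.95:              # overlap threshold for "retrieved"
            retrieved += 1
    return retrieved / P

# retrieval_fraction(alpha=0.05) is close to 1, and the fraction drops as alpha
# approaches the theoretical capacity of roughly 0.14 for large networks.
```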
In conclusion, the main finding I want to report is that synaptic depression and facilitation, implemented in a recurrent neural network, are a very easy and attractive way to make something we may call dynamic memories. These oscillations between different memories could produce very interesting behavior if they are coupled to sensory data. The switching is very rapid, and I should emphasize that: you may think of a multimodal landscape, where being in one well and then moving to the next well typically takes a very long time, but this phase transition is such that the switching is extremely rapid, as you saw in the simulation, because of a very strange kind of instability in this network. This allows for input sensitivity, but let me not dwell on that point. And, as I already said, the storage capacity goes down.

As a last point, some discussion of related work. The work I presented was started in 2002, and we have been working on it since; mainly my co-worker Joaquín Torres at the University of Granada has been doing a lot of this work over the last ten years. A difference with other modeling work is that it often assumes continuous deterministic neural dynamics rather than the binary stochastic dynamics we use, and if you do that you do not observe the rapid switching that we observe; I think that is quite an essential point of our work. The second point: this variable x is one per neuron, and there is an old model by Horn and Usher about dynamic thresholds; mathematically the two are indistinguishable, so you could take this model, interpret x as a threshold variable, and get similar results. The switching could be related to cortical up and down states as they have been reported; however, similar cortical oscillations can also be generated by a hyperpolarizing potassium current, and possibly both mechanisms are relevant. The storage capacity has also been studied by Bibitchkov for very sparsely coded patterns: the patterns I stored here have fifty percent activity, fifty percent on and fifty percent off, whereas more realistically you have very sparse patterns, but you get basically the same findings. And last but not least: the storage capacity goes down, but independently of that you can look at the basins of attraction, and our numerical simulations show that the basins of attraction are actually enlarged by both synaptic depression and facilitation. Here is the original paper from 2002 and a recent review paper that Joaquín Torres and I wrote in Frontiers in Computational Neuroscience. Thank you for your attention.

Okay, thank you for this nice talk. Are there questions? Yes: what is the meaning of this beta parameter in real life? You are defining it so that, depending on the strength of that value, the synapse is doing something, or...? Well, of course, with these models you never know exactly what the strength is; these are phenomenological models. It is basically the strength of the synapse, and in this binary model it is not only the strength of the synapse, it also models the amount of stochasticity in the transmission between neurons; so you should think of it as how reliable the transmission from one neuron to the next is. And the other parameter, the fast decay time, is that not a certain kind of thermal noise in your model? Which fast decay time? In the Tsodyks-Markram model; you were talking about two decay times, one large one.
Yes, the small one is basically used to shape the alpha function for the PSP, and in this binary model we ignore that. But would you say that, when there is a spike, it acts as a certain kind of noise? Perhaps, but I think you can safely ignore it in this model, because these oscillations occur on time scales of tens to hundreds of milliseconds, whereas this tau_in is typically of the order of three milliseconds or so; it is very fast.

I was wondering what is actually the reason that the stored patterns remain so stable, irrespective of the switching and the continuous changing of the synaptic strengths in the network; I would have expected some accumulation of noise, say, which destabilizes the individual patterns. So you mean you would expect that even in the absence of dynamic synapses... No, no, in the presence of dynamic synapses. Well, if you look at this simulation, for instance, it is very noisy, but there are ten patterns stored, and what happens is the following, something I did not say before. The patterns that are stored are the first ten neurons on and the remaining ninety off, and so on, all these blocks, and what you actually see is that the activity is a mixture of these patterns. That is because the patterns are not orthogonal, and you get all this mixture-like behavior, which also affects the storage capacity, so there is a lot going on here. But anyway: you are in such a mixture state at a certain time, and then all the synapses that correspond to the active neurons get depleted, with the result that those patterns, say these three local minima, start to evaporate. But there are many others, and because of the stochasticity you jump to another one; the choice of where you jump to is quite random, but all these other attractors are alive and well because their synapses are not depressed, so you jump to one of them more or less indiscriminately, and which one you get attracted to depends on the realization of the noise at that point. As soon as you have jumped to this other mixture of patterns, or pure state, the synapses of the previous one recover and those attractors re-establish themselves. So you could think of it as an energy landscape where the bumps are moving. Okay, thank you.