I'd like to welcome everybody to the first event in the 2014 INCF seminar series. We're very fortunate to have with us today Professor Karlheinz Meier, who is a professor at the Ruprecht Karl University in Heidelberg. Among his many appointments and activities, he's co-director of the EU flagship Human Brain Project, which he'll be telling us a little bit about today. He's also coordinator of the EU BrainScaleS consortium and of the FACETS training network, both of which share many interests with INCF. Last year at our annual World Congress, which was hosted here at the Karolinska Institute, Professor Meier spoke in a session on large-scale brain initiatives. And we're delighted to have him join us back here at Karolinska today to open our 2014 seminar series with a talk entitled From Ions to Electrons: Physical Models of Brain Circuits. So welcome, and thank you very much. Okay, thank you. Thank you for the introduction and thank you for the opportunity to give this talk here in this seminar series. Indeed, I was here last year, it was very enjoyable, and it was about the Human Brain Project. Today I will talk very little about the Human Brain Project, but more about our actual work in neuromorphic computing. The subject is From Ions to Electrons, and I would like to start my presentation by talking a little bit about the method of science, which is a very general topic, of course. You know that the natural sciences are based on reality. I mean, that is our reference. Whatever we do, if it doesn't describe reality, it's probably pretty useless. So whatever we do, whether we model, write abstract descriptions, or build physical models, as I will explain later in my talk, it always has to match reality. And that means our reference is experiment. And the experiments we refer to mostly, and initially, are the experiments that were done with brain tissue, with dead brain tissue or living brain tissue. The fundamental experiment, which I know of, and I'm not a neuroscientist, I'm a physicist, I hope that was clear from the introduction, so I'm talking about a field that is not my own here, shows that the brain is not the kind of uniform grey matter that it looks like if you see it in a lab. It's really structured, highly structured, and it consists of individual cells that are spatially separate objects. This is one of the famous pictures taken by Ramón y Cajal, using the staining method developed by Golgi. And that already suggests that the brain is performing something which I would call interaction over a distance. So it's very much like many systems in physics, where you have galaxies or particles or whatever, and they interact with each other over a certain distance. Now, what we do not see in this picture, but what we know by now, is that those cells, the neural cells, actually perform an integration process. That means they collect information, and they collect information in space and in time. Through those fibers here, information is collected over a large spatial area. And if we analyze the time structure of the neuron's response, of the membrane potential, we also see that there is integration in time. And of course, this is all very nicely described by the model which has since been developed, a very naive picture here, where we have the neural cells that collect information through dendritic trees and then send it through axons to the follow-up neurons.
And the connections between those cells are realized through synapses. The important thing, also for the models I'm going to explain, is that the communication is based on these action potentials, which we often call spikes. And the important issue that I will refer to all the time is that those spikes are stereotypical; at least that is the impression you get if you look at a few of them in actual recordings. They all look more or less the same. They have a certain width, which is of the order of milliseconds, and they have a certain pulse height, which you see here, which after depolarization is of the order of something like plus 40 millivolts or so. So whatever the neuron does, it always resets its history, passes through the threshold, and produces this kind of stereotypical spike. And of course, that is a motivation for neural computation; it's certainly the motivation for the way we build neuromorphic computers today. If all those spikes are the same in width and in pulse height, then it is not worth going through the effort of reproducing them exactly in electronics. There's probably little point in making these spikes exactly 40 millivolts. Why would you do that if there is no information carried by that number? You might as well make them 1 volt or 10 millivolts or whatever is convenient for the technology that you are using. What does seem to be important, of extreme importance in fact, is the timing of the action potential, that is, the time at which it actually occurred. And it's also clear from this picture that this is a continuous scale. So it's not like in computers, where ones and zeros are produced at certain time stamps defined by an external clock that sits somewhere in the computing circuit. That doesn't exist here. The neuron fires whenever it feels like it should fire: when it reaches this threshold value. So continuous time is very important, and I will use that also throughout my talk. Continuous time, and no clock. It's also clear that those neurons will not automatically, on their own, be synchronized. There may be collective synchronization effects, which we will also see later, but there is no clock that enforces synchronization. So it's also an asynchronous system. That is a little bit of the biological motivation, of the reality, as I called it. Of course there are many more aspects, in particular network aspects, and all the experiments that people do with behaving animals, for example, which is not my field and not what I'm going to talk about here. I'm just collecting the methods of science. The second method is an important one for many of you here, I suppose, and that is theory. If you observe reality, of course you don't stop there: you try to build theories and you try to build models. You usually start with models, and once you start to detect some general properties of models, you start to develop a theory. There are many examples of that in physics. There are probably fewer examples in neuroscience, but there are quite a few already. And the typical model which you can have for a cell, if I say it's an integrator, is this kind of model. A very, very simple model for an integrator is this kind of circuit here, where you have a capacitor and a resistor, which you can also describe by a conductance, which is just the inverse of the resistance, and maybe some kind of a battery. And you learn in the first year of physics that this system can be described by a very, very simple equation.
It's a differential equation which describes the change of the voltage as a function of time, and the change is just proportional to the voltage. And there is this little minus sign here, which is very important. It means that if you charge up the capacitor and leave it alone, it will just discharge exponentially, and it will approach the value of this battery, which, in case it's zero, means it will just discharge to zero. And the time constant depends, of course, on the choice of the resistance, or the conductance, and the capacitance; it's just the product of the capacitance and the resistance. It's R times C. That's the time constant of this little model, which might serve as a model for a cell membrane. Of course, we know that there are more complicated things going on. In particular, the neuron has a very strong non-linearity, which is often described by the concept of a threshold. If you don't want to introduce artificial thresholds, there are wonderful models like this four-dimensional model here. This is a set of four differential equations, which you certainly know. Those are the Hodgkin-Huxley equations, and they describe the full dynamics of the neuron, including the generation of the depolarization, the generation of the action potential, and the refractory period after the action potential. That too can be mapped into an electronic circuit, which now looks a bit more complicated, although it very much resembles the previous one; even if you don't understand that much about electrical circuits, you can see they look very similar, they use the same kinds of symbols. The most important difference is that there are these little arrows here on two of those conductances, and this means those are voltage-dependent conductances. That is the very important thing that Hodgkin and Huxley introduced. So what we do here is we go from ions, which we observe in the biological system, to abstraction, and then to electrons. Why are we going to electrons? Because those circuits can easily be realized as electronic circuits. Is it really easy to do that? Is it worthwhile to do that? Probably not, because you can easily solve these equations, so why would you go to an electronics shop, buy the components, and build this thing? There would be no point in doing that. The argument only comes if you have 10 to the 11 of those units and they are interconnected by synaptic inputs; then you cannot solve the equations anymore, and at that point it becomes interesting to really build those systems, as I will explain later. Now of course neuroscience, and theoretical neuroscience, has developed since the days when Hodgkin and Huxley developed their model and received their Nobel Prize. In fact, people have started to develop models that are in a way simpler, because they rely on fewer differential equations. Hodgkin-Huxley has four equations; there are two-dimensional models; and there are extremely simple ones like the integrate-and-fire model, where you have just one differential equation and no mechanism to produce the action potential, so you have to add that by some ad hoc mechanism. But there are also things in between, the two-dimensional models, and the typical two-dimensional model is one which was actually developed in this way in one of our projects, the FACETS project. I will show this slide because it also shows the method: how do you end up with a good model that is simple but still captures very important features of the cell?
So what you do is you use an input current, which you can feed into a real biological neuron, and you measure the response as a function of time. This is some kind of random input current. You can also put that current into your computer, not physically, but as a numerical stream of data, and then you can put a neuron model here and ask: does it behave like my biological data set? Then you can tune parameters here, and you see, well, it agrees pretty well; the spikes occur at the locations where they occur in biology. And then you do a very important thing: you cover up this part of the spike train here, you make a prediction, you compare it to biology, and if it fits, you probably have some predictive power in your model. So that was done, and one of the outcomes of our own work was that we developed this AdEx model, the adaptive exponential integrate-and-fire model, which is different from the equation I showed you before. You remember those terms here: the voltage, the minus sign, the change of the voltage, the capacitance, the leakage conductance, and this is the battery. But now there are new terms. There is an exponential term. There is this innocent little I here, which in reality is a sum over thousands of synaptic input currents, excitatory and inhibitory. And, very importantly, there is a second variable here, the variable w, for which there is another differential equation. So that's a little bit like Hodgkin-Huxley, but here we have only one more equation; it's a two-dimensional model. And this little equation, this little variable, together with the V, gives a very rich repertoire of responses of this model in terms of firing patterns, and I will describe that later when I show you our experimental implementations.
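To make this concrete, here is a minimal numerical sketch of the AdEx dynamics just described, written as a plain forward-Euler integration in Python. The parameter values are illustrative placeholders of my choosing, not the fitted values from the FACETS work; dropping the exponential term and the adaptation variable w recovers the simple leaky integrator, the RC membrane from the beginning of this discussion.

```python
import numpy as np

# Minimal forward-Euler sketch of the AdEx (adaptive exponential
# integrate-and-fire) neuron.  All parameter values are illustrative
# placeholders, not the fitted values from the talk.
C       = 200e-12   # membrane capacitance [F]
g_L     = 10e-9     # leak conductance [S]  (tau_m = C/g_L = 20 ms)
E_L     = -70e-3    # leak reversal potential, the "battery" [V]
V_T     = -50e-3    # exponential threshold [V]
d_T     = 2e-3      # slope factor of the exponential term [V]
a       = 2e-9      # subthreshold adaptation [S]
b       = 50e-12    # spike-triggered adaptation increment [A]
tau_w   = 100e-3    # adaptation time constant [s]
V_peak  = 0.0       # numerical spike cut-off [V]
V_reset = E_L

dt, T = 0.1e-3, 0.5          # time step and duration [s]
I = 0.5e-9                   # constant input current [A]; in reality a sum
                             # of thousands of synaptic currents
V, w = E_L, 0.0
spikes = []
for step in range(int(T / dt)):
    # the two coupled differential equations of the model
    dV = (-g_L * (V - E_L) + g_L * d_T * np.exp((V - V_T) / d_T) - w + I) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_peak:          # stereotypical spike: reset and adapt
        spikes.append(step * dt)
        V = V_reset
        w += b
print(f"{len(spikes)} spikes in {T} s")
# Dropping the exponential term and w recovers the plain leaky integrator
# dV/dt = -(V - E_L)/tau_m  with  tau_m = R*C.
```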
Before I do that: we are still talking about theory here; the subject is still theory. What's the role of theory? One important question is how a system like that can actually store information. We know how information is stored in bits. An 8-bit number can store 256 different representations of numbers or signs; you can put the ASCII code in, and things like that. We know what 8 bits are. That's a clear measure of information. Are spikes also coding information? Can you express that information in bits? Well, there are people, theorists, and here I'm again back to theory. Claude Shannon is really one of the heroes of information theory in the 50s. He was investigating the transmission of information across noisy channels, to optimize telephone lines, and he made some very, very fundamental statements. For example, he said you can measure the amount of information in some symbol X by taking the probability that that symbol X occurs. I'm sorry, I use W here, which is the German abbreviation: Wahrscheinlichkeit, probability. So you take the probability of the symbol X and you take the negative logarithm of this probability to some base, and if that base is 2, then, he says, the information stored in that symbol is measured in the unit of bits. In order to illustrate that, I have a little bit of an entertaining example here, and this is the last slide I'll show you with German words. I will show you a text. It's a German text, and I will show you the English translation later. It's a German text because it's a famous text; it's written by Goethe. And it's about how bad the academic system really is, and I like it very much for two reasons: first of all, it's very entertaining, and secondly, it contains some interesting things. So this is the text. I will not read it to you in German; I will read it to you in English. But what I did here is I counted the character B, and there are five occurrences of the character B. From that you can calculate the probability: you take all the characters, you take the five Bs and divide by the number of all characters, and you get the probability. You take the negative logarithm, and then you have the Shannon information that's stored there. Here is the English translation, which is extremely entertaining. It says: ah, now I have done philosophy, I finished law and medicine, and sadly even theology, taken fierce pains from end to end; now here I am, a fool for sure, no wiser than I was before. That is what Goethe says. Master, Doctor, that is what they call me, and for ten years already, crosswise, up and down, to and fro, I have been leading my students by the nose, and see that we can know nothing. It almost sets my heart burning. This is a very famous quote from Faust, and I really like it. Now, the B only occurs four times here, which means the probability is lower, and that means the information content of the B in the English language is actually a little bit bigger in this text than in the German one, because the measure of surprise, which is what this negative logarithm really expresses, is higher. Now, if you really evaluate the German text here, you see that the Shannon information, which is also called the information entropy, is 5.79 bits. That's the information that's stored in the letter B in that text. You can do the same thing for neural spikes, and I found that very entertaining. You have to make assumptions here, of course. You say you have a spike train; this is a spike train; it fires randomly, with a Poisson distribution of inter-spike intervals, at 50 hertz. You have to make some assumption on the time resolution with which you can separate spikes, and you get the same number: it's 5.79 bits. So a single spike, if you do spike coding, typically carries an information of 5 to 6 bits, which corresponds to the character B in Goethe's Faust. Which is quite nice. So theory also tells us that there is information coded in spikes.
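As a small sketch of the calculation just described (the 0.36 ms time resolution in the spike example is an illustrative assumption, chosen so that the numbers come out close to the figure quoted in the talk):

```python
import math

# The Shannon measure: the information carried by a symbol x is
# -log2 p(x) bits, where p(x) is the probability of that symbol occurring.
def shannon_bits(p: float) -> float:
    return -math.log2(p)

# Character example (any text will do; here a fragment of the Faust quote):
text = "leading my students by the nose and see that we can know nothing"
p_b = text.count("b") / len(text)
print(f"letter 'b': {shannon_bits(p_b):.2f} bits")

# Spike example: a Poisson spike train at 50 Hz.  One has to assume a time
# resolution dt for separating spikes; 0.36 ms is an illustrative choice.
rate, dt = 50.0, 0.36e-3          # [Hz], [s]
print(f"one spike:  {shannon_bits(rate * dt):.2f} bits")   # ~5.8 bits
```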
Now, knowing all this, we have a wonderful model, and we know that information is coded. We can now go ahead and try to understand how information is processed. The neurons are simple objects, at least in these simple models which I have, but if you build networks, of course, they start to be very complicated. I said that the brain is a little universe of many cells that interact with each other, so you can probably use simulation methods to address information processing in the brain. Again I'm referring to cases in physics here. These are two galaxies. You know what galaxies are: big systems of 10 to the 9 stars or so, and they interact with each other through gravity, but also through all kinds of radiation pressure, hydrodynamics and things like that. And what people have done, successfully, is to take observations. This is an observation of a galaxy, and this is a simulation, and they find that they can describe the structure of galaxies by simulation. Many people say, okay, that's clear, because we knew all the physics that drives galaxy development beforehand: you first find out how a galaxy works, then you simulate it, and then you find that the two agree. Right? But I can tell you that's not at all what people do. I'm showing you this example because many people criticize the Human Brain Project for the following reason: they say you should not start to work on brain simulations before you have understood the brain. Only last week there was an interview on German television with a famous neuroscientist in Germany, I won't tell you who he was, and he said the Human Brain Project will fail because people don't understand the brain, so they cannot simulate it. Well, I'm showing you these galaxies because actually we don't understand the galaxies. You cannot understand the structure of galaxies from known physics. You may have heard that there is dark matter, for example. Strange thing; nobody knows what it is. But you have to put it in, otherwise you don't understand the structure of galaxies. In simulations, though, you can put it in, you can take it out, you can give it certain properties, you can switch it on and off, and you can tune your simulation so that at the end you understand the global properties of galaxies. So what I'm claiming, and it may sound a bit trivial, is that you can use simulations to understand. Some people deny that; they say you first understand, then you simulate. I'm saying you simulate, and then you understand, by playing around with your simulation. Simulation is a tool to understand complex systems. So if you believe me, you can say, well, that's great, so we do brain simulations and we will understand the brain. Now, there is one of these movies, which you have certainly seen, from Henry Markram, my colleague in HBP. He has simulated a cortical column, that's 10,000 neurons of a rat's brain, 14 days old, a very detailed reconstruction of the cells and their connectivity. Very fancy; it's many, many years of work, very important work. They measure electrical activity, and the colors code the voltage difference between the inside and the outside of the cell.
And so what these people do is they put this on a supercomputer, which is by now an ancient supercomputer, one of the early Blue Gene computers, and it used 100 kilowatts of power. In nature, the same piece of tissue uses about 5 microwatts. You can just scale this up to the human brain. It's very naive, it's probably wrong, but I can do the multiplication, everybody can do the multiplication. The brain has a power consumption of about 20 or 30 watts or something like that, and it has 10 to the 11 neurons, so you end up with 1000 gigawatts for the simulation. What does 1000 gigawatts mean? Just to give you an idea: if I take all the power stations in my own country, in Germany, all electrical power stations, that's 170 gigawatts. So it's about five times all the electrical power stations in Germany that you would need to run the simulation. And even worse, it would take 1000 times longer than in biology; time scales are all stretched by a factor of 1000 in this case, or by a factor of 100 for simpler models. That means if you want to simulate a day, a full day of some kind of activity, learning, plasticity, things like that, it would take you three years to get the results, and during those three years you would have to use, continuously, five times the total electrical power of Germany. Of course nobody will ever build a computer like that, and it would be impractical for many other reasons. So we have to do something about it. The simulation approach is nice, it's brilliant, there's a lot you can learn from simulation, that's what I claim, but in order to succeed we have to build computers that are better than the current ones, and in particular they have to be faster and more energy efficient; otherwise it's a hopeless enterprise. We also have to work on our models, of course, and make them simpler, and I will discuss all of this in the context of neuromorphic computing. But let me first discuss a little bit whether we understand the power consumption of the brain. Actually, we understand it pretty well. How much does a neural computation really cost? This is a rough, and probably incomplete, estimate based on two contributions. We know that energy is generated by hydrolysis of ATP molecules, and you need about 10 to the 9 of them for an action potential and 10 to the 5 for a synaptic transmission; there is the reference here. We know that from one of those ATP molecules we get about one electron volt, 10 to the minus 19 joules. You do the multiplication, and this is what you have to pay for an action potential, and about 10 femtojoules, for example, for a synaptic transmission. Now, we know how many neurons we have, we know how many synapses we have, we make some rate assumptions, and if we multiply all this we end up with about 10 watts for neural firing. That's pretty close to what the reality is, so that means basically we understand why the brain is so energy efficient.
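Here is the same back-of-the-envelope arithmetic written out, using the rough numbers quoted above; the 1 Hz mean firing rate is an illustrative assumption:

```python
# Back-of-the-envelope repetition of the energy numbers quoted above.
# All inputs are the rough values from the talk; the ~1 Hz mean firing
# rate is an illustrative assumption.
E_ATP      = 1e-19            # ~1 eV released per ATP molecule [J]
E_spike    = 1e9 * E_ATP      # ~10^9 ATP per action potential  -> ~1e-10 J
E_synapse  = 1e5 * E_ATP      # ~10^5 ATP per synaptic event    -> ~1e-14 J

n_neurons  = 1e11
n_synapses = 1e4 * n_neurons
rate       = 1.0              # assumed mean firing rate [Hz]

P_brain = rate * (n_neurons * E_spike + n_synapses * E_synapse)
print(f"brain estimate: ~{P_brain:.0f} W")     # a couple of tens of watts

# Naive scaling of the cortical-column simulation: 100 kW for 1e4 neurons.
P_sim = 100e3 * (n_neurons / 1e4)
print(f"simulated brain: ~{P_sim/1e9:.0f} GW")  # ~1000 GW
```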
So why are our computers, as we will see later, 14 orders of magnitude less energy efficient? Why are computers such a disaster if the brain works so extremely well? One suspicion one might have is: well, the brain is made of this strange thing, biological material, and that for some reason is very energy efficient, and it can never be silicon; silicon is an artificial material, it must be bad, it can never be as efficient. That's actually wrong. If you take a transistor, a very naive model of a transistor is just a capacitor; charge movements are controlled by electric fields, which is why it's called MOS. This is a conductor, this is a conductor, and there is an oxide insulator in between, and you can charge this up with a voltage here; this is an old-fashioned transistor I'm showing. And the energy you can calculate by taking the capacitance times the voltage squared, divided by 2; that's the energy stored there. You calculate that it's about one femtojoule, which is nothing, it's very small; it's actually one tenth, ten percent, of what you use for a synaptic transmission. So what I'm claiming is that a transistor is a very energy-efficient device. You can easily charge it up and control currents with it. You could use ten transistors to simulate one neuron or one synapse, for example, and it would be as efficient as the brain. And computers use transistors, so then why are they so inefficient? The reason, of course, is the architecture. It's not the component, it's the architecture. When we simulate, like the Markram model and many others, people use von Neumann architectures, where you have a data memory and a program memory; often they are close to each other, but they always have to communicate with the processor, and that means you have to shift data all the time, and very often. For example, look at the exponential decay of the signal, which we have seen in this simple model of a cell membrane: to calculate an exponential on a computer is a very expensive enterprise, because you have to do a Taylor expansion or use lookup tables to actually arrive at a value for the exponential function, and there's a lot of data shifting going on. So what I'm claiming, and I think this is generally accepted, is that the problem of the inefficiency of computers is really the architecture, and the fact that they use Boolean algebra to do their calculations, which is definitely not what the brain does. If you believe all that, then you are probably still with me if I tell you there must be another way of doing this, and this is what I would call neuromorphic computing. The idea of neuromorphic computing is an extremely simple one. It's just that equation: you take a neural cell and you make it an entity on a silicon substrate. You make a model on a silicon substrate, and you don't write a piece of code; you just define little spots on your silicon substrate and you make them behave like neural cells. You replace ions by electrons. As a side effect, there is no global synchronization, as I described before, and there is continuous time. And so this is also the last picture in this sort of science framework; it somehow creates a fourth or a fifth pillar in my picture. I said there is model building, there is theory, there is reality, which is the reference for everything, there is simulation, but maybe there is something else, and I call that synthesis: we don't go through abstraction by mathematical models, but we directly build physical models.
This is an ancient idea, actually. At the time of the Copernican revolution, when people discovered the heliocentric system, with the sun in the middle of the planetary system, it was very hard for people to understand. How can it be that the sun is in the middle, when we feel that the whole sky is around us, that we, the earth, are in the middle? In order to visualize it, people started to build little physical models of the universe. They put a lamp in there, and then they had these spheres on which they let the planets circle. So that's a physical model of our planetary system, from days when it was not possible to present this mathematically, and certainly not on computers, but it was a very efficient way of telling people what the universe, at least the planetary system, looks like. This is actually a very nice painting. Now, that is really ancient, and we would probably rather laugh about it, but people still do this today, not with planetary systems but with quantum systems. That's an idea by Feynman, whom you certainly know, a famous theoretical physicist who had lots of unusual ideas. One of the ideas was about quantum systems, which, like complex classical systems, are very difficult to calculate analytically and even difficult to simulate. Why is that? You probably know that quantum physics means that particles have wave functions, and if you make particles very cold, the wave function becomes very spread out and it interferes, and these things are very difficult to calculate. So the idea of Feynman, and now of people like Bloch and others, this is from 2012, is the following: you take a substrate, you put cold atoms on this substrate, and you trap them with light beams, those are the light beams here, and then you have a little physical model of a solid state. (It's okay, that's just the building, I see; it's not an earthquake. I didn't expect earthquakes here, but you never know.) Okay, so this is the quantum case. The point of doing this is that you have this system, you can control it, you can change the distances, you can change the interactions with magnetic fields, you can change parameters, you can observe it through this microscope, and Schrödinger's equation, which you would have to solve to describe it theoretically and which cannot be calculated on a computer, is solved automatically by the physics itself. And of course the idea is to do the same with neural systems. There is no quantum mechanics involved, but it's definitely a complex system, and it has these complex interactions which I just introduced. So what you do is you take this kind of equation, you put it into a circuit which you really build on a chip, and you just watch how it behaves. Now there are some side effects here, and you can look at some numbers. For example, there is the voltage swing, which is the distance in voltage between the minimum and the maximum, which in biology is a couple of tens of millivolts, as we have just seen. I put in 10 to the minus 2 volts here, which is a bit low, but roughly that's what it is; in electronics it's a bit bigger, but not that much bigger. The leakage conductances, or the resistances expressed as conductances: in VLSI they are a bit bigger, by a factor of 100, and that is already a sizable effect. A really big effect is in the capacitances: in biology they are about 10 to the minus 10 farads, and in VLSI, which is very-large-scale-integration electronics, they are about 10 to the minus 13. Now, all those numbers are a little bit different, but there is some kind of a conspiracy: if you multiply G by delta V and divide by C, then you suddenly get a huge difference, a factor of 10 to the 6, between biology and electronics. Why would I combine the numbers like that? Well, the unit of this quantity is volts per second; it just tells you by how much the voltage ramps up or down in a certain time interval.
In electronics, with these kinds of parameters, the voltage would change by a million volts in one second. Now, that's totally absurd, of course, because you cannot change the voltage like that on a piece of silicon; it would just burn. But what you can do is wait only 10 to the minus 6 seconds, a microsecond, and then the voltage has changed by just 1 volt, which is what actually happens in electronics. What it tells you is that the intrinsic time constants of electronics are very, very fast, whereas in biology the characteristic slope is about 1 volt per second, and since we are talking about voltage swings of 100 millivolts or less, it means that the characteristic time constant of biology is milliseconds. This is very interesting, because how does a system like that know how fast it should run, how fast it should emulate a cell? It's not given by a clock, it's not decided by you as a user, but it is decided by the physical constants, by the physical properties, of the system. And that means if you build physical models with these kinds of parameters, they will intrinsically be very fast, much faster than biology. Now, this is an extremely naive way of presenting things, because if you really build systems like that, there are more time constants than just the membrane time constant: there are the post-synaptic rise times, the post-synaptic fall times, there are delays, there is the propagation of signals along the axon, there are dendritic computations; there are many, many time constants. So if you want to play this game of scaling the time constants, you have to make sure that you do it in a consistent way. But if you do, you have the freedom to make the system as fast or as slow as you want.
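A quick way to repeat that estimate, using the rough orders of magnitude quoted above; the VLSI voltage swing of 0.1 V is an illustrative stand-in for "a bit bigger than biology":

```python
# The "conspiracy of constants": multiply G by delta-V and divide by C.
# Numbers are the rough orders of magnitude quoted in the talk.
bio  = dict(dV=1e-2, G=1e-8, C=1e-10)   # volts, siemens, farads
vlsi = dict(dV=1e-1, G=1e-6, C=1e-13)

for name, p in (("biology", bio), ("VLSI", vlsi)):
    slew = p["G"] * p["dV"] / p["C"]    # voltage ramp rate [V/s]
    tau  = p["C"] / p["G"]              # intrinsic membrane time constant [s]
    print(f"{name:8s}: ~{slew:.0e} V/s, tau ~ {tau:.0e} s")
# biology : ~1e+00 V/s, tau ~ 1e-02 s   (milliseconds)
# VLSI    : ~1e+06 V/s, tau ~ 1e-07 s   (intrinsically much faster)
```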
So what you do is take a model like the AdEx model which I described, you make a block diagram, and then you sit down at a computer and draw a neuron from transistors. This is what it looks like; it's about 300 by 150 by 20 microns, and these are actually two neurons. You see this pretty picture here, which basically consists of a couple of hundred transistors, and which is a physical model for the type of neuron I just described. And then you do the following. Building a physical model means you just take one of those neurons and put it on a substrate, and then you can use some automatic algorithm to put two neurons on the substrate, and then you can make it 4 and 6 and 8 and 10 and 12, and then you can make a full chip out of it. It's just copying the thing; I'm simplifying here, but this is what you call massive parallelism: you take identical cells and you place them many, many times. Now, if you look at this chip, this is a neuromorphic chip which we built; this is a design drawing, not the actual chip. You cannot see the neurons anymore. Where are they? This is a neuron, and this is a neuron; they are not really visible here, they sit underneath this blue strip, because we only show certain layers here. And what I'm saying is that the neurons on the chip are very fancy circuits, people really spend a lot of time designing them, but in terms of real estate, in terms of area, the neurons are almost irrelevant; they just occupy these two strips here. So what is all this blue stuff, all this stuff around them? The blue stuff is synapses. If you want to have a realistic number of inputs per neuron, like 10,000, in our case even 16,000 inputs per neuron, you need to put a lot of synapses on the chip; there is a factor of 10,000, for each neuron there have to be 10,000 synapses. So those chips are totally dominated by the synapses, by the synapse circuits, which I do not discuss here, and by the connections. Just like in the real brain: the neurons are important, but they don't need a lot of space; what needs a lot of space is the synapses and the connections. This was a design drawing; I can also show the real thing. This is what such a chip looks like, and you recognize what I just showed: those are the synaptic fields, our neurons sit here somewhere, and all of this is connections. On this chip, which is about 1 by 2 centimeters, a small little thing, there are 100,000 synapses and maybe 400 neurons. That is nice, and I can show a few experiments later that we do with these kinds of chips, but of course it's still far from the brain; there is a factor of 10 to the 9 missing in the number of neurons, and you would need something like a billion of those chips to make a full brain, which is not possible, it's just physically not very sensible. So how can we build bigger systems? Well, one way is to use these little wires here; you see there are little wires leaving the chip that go to this printed circuit board, and of course there is a neighboring chip here, you put one there and one there and one there, and in that way you can build bigger systems of maybe 10,000 or 100,000 neurons. On the other hand, that's not a very efficient way; it's very expensive, and if you have to send information like spikes through these little bond wires it's really slow, and they also tend to break and have all kinds of problems. So one thing we decided early on in this project has to do with how chips are delivered. Chips are not delivered as chips: if you order them in Taiwan, where they are produced, you normally don't get the individual chips; they give you these big discs, the wafers. This is what you get delivered. You recognize the little chips here, with the neurons and the synapses, but you see it's a big thing; this is a wafer, a 20-centimeter silicon wafer. Wouldn't it be a pity to cut it up? So in FACETS we developed the idea not to cut this into pieces, but to leave it intact and to use the whole thing as one network. That sounds very interesting, but in practice it's not so easy. Just as a little side remark: you see that there are these squares here, which are 2 by 8 chips, and they seem to be somehow separated, these areas here. That's indeed the case. The chip makers call these reticles, and the reason they exist is that these chips are made by photolithography. What these people do is imprint a pattern, for the doping and for the metal layers and things like that, and that pattern is imprinted in steps, in steps of 2 centimeters. So you can only make chips which are as big as 2 by 2 centimeters; that's the maximum size, because between this reticle and this one there is no electrical connection, no electrical connection here, and none here. So what we did, together with a Fraunhofer Institute in Berlin in Germany, is add a sort of additional post-processing layer which then makes all the metal connections here. Those are lots of technical problems which I will not discuss here.
So we built these kinds of systems, where we have the wafer here, which now has 200,000 neurons and 50 million plastic synapses. It's an accelerated system; it runs 10,000 times as fast as biology. This is where the neural network is, and this thing on top here, which looks monstrous, is a 50 by 50 centimeter printed circuit board, and it just communicates with the system: it provides the power, it can send in information, it sends spikes in and reads spikes out, it configures the system, and so on. This is not just science fiction; you see here what it looks like in the lab. We have two such systems now in our lab in Heidelberg, and we are building six for the BrainScaleS project, for the experiments. An important aspect is memory. I said von Neumann machines have separate memory and processing; this is not a von Neumann machine, so memory and processing are in the same place. What does that mean? Where is the memory in a system like that? It's actually in the cells. The neurons have a memory because they have parameters, about 20 parameters, 21 to be precise, which are realized as so-called floating gates, which is more or less the technology you have in your memory sticks, but transferred to an analog electronics process. The weights of the synapses have to change fast, because if you want to do learning, STDP learning, or short-term plasticity, Tsodyks-Markram plasticity for example, you have to adjust the weight on a very short time scale, so you need a fast response, in particular since this is a time-compressed system. And here we have little SRAM cells, 4-bit SRAM cells, where the synaptic information is stored; the synapses are about 10 by 10 micrometers.
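As an illustration of what such a plasticity rule does with a 4-bit weight, here is a minimal sketch of pair-based STDP with 16 discrete weight levels. The time constants, amplitudes and the simple rounding scheme are assumptions of mine for illustration, not the actual behaviour of the on-wafer circuits:

```python
import math, random

# Minimal pair-based STDP with a weight stored at 4-bit resolution
# (16 discrete levels).  All constants are illustrative placeholders,
# expressed in biological time.
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3     # STDP time windows [s]
A_PLUS,  A_MINUS    = 1.0,  1.0        # update amplitudes in weight steps
W_MAX = 15                             # 4-bit weight: integer 0..15

def stdp_update(w, t_pre, t_post):
    """Return the new discrete weight after one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:      # pre before post: causal pairing, potentiate
        dw = A_PLUS * math.exp(-dt / TAU_PLUS)
    else:            # post before pre: acausal pairing, depress
        dw = -A_MINUS * math.exp(dt / TAU_MINUS)
    return max(0, min(W_MAX, round(w + dw)))

w = 7                                   # start in the middle of the range
for _ in range(50):                     # mostly causal pairings drive w up
    t_pre = random.uniform(0.0, 0.1)
    w = stdp_update(w, t_pre, t_pre + random.uniform(0.0, 10e-3))
print("final weight:", w)               # saturates near W_MAX
```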
All right, so there are huge promises coming with systems like that. Neuromorphic systems should be energy efficient, because the brain is energy efficient, so maybe these things are also energy efficient; let's see. They should be fault tolerant: we are losing a neuron per second, I heard, 100,000 a day, and still we are more or less able to operate, and maybe we can observe something like that in our systems. Self-organization is important, learning; there is no software, and software is probably the biggest problem of modern computing, so it would be great to get rid of it. Fast: we've seen that simulation is very, very slow, so maybe we can be fast. And compact, like our brain; we don't need a hall full of computers. So these are the problems of traditional computers, and this is what neuromorphic systems promise. Are these promises true? I will show you some results. These are experiments which we are currently doing, and I will just select a few of them, to show you basically what we are doing. I define four categories of experiments. The first are the most simple and basic ones: you just look at the cells, and maybe some very simple circuits, and you check whether they really behave as you expect. You look for firing patterns, you look for effects of synchronization, stability, order and chaos in some random networks, things like that. Not very neuroscience-oriented, but just to see whether the thing works. Then there are two things which you probably know about, which most people do. The second category is to implement and test fundamental, generic concepts and theories. Those are not things which are reverse-engineered from biology, but things that come out of the brains of theorists, of theoretical neuroscientists, and which we test. Typical theories are liquid computing, which you may have heard about, and in fact most of these things we do together with Wolfgang Maass in Graz: liquid computing, probabilistic inference, Boltzmann machines with spiking neurons. Boltzmann machines are an older theory, but we do that with spiking neurons. So this is more about generic concepts. Then, maybe the most popular thing, and I will show you two examples of that today, is biologically realistic, reverse-engineered circuits in closed loops: small brains or parts of small brains. Another typical criticism of the Human Brain Project is, why don't you start with insect brains? I will show you an example from insect brains, to make people happy that we are actually doing insect brains, or at least parts of insect brains, and also cortical structures, cortical columns, functional units. So that's really biology. Then number four, the really cool thing, is to go away from neurobiology, to really take those concepts for information processing and do things which have nothing to do with neuroscience: neuromorphic controllers for engines or manufacturing plants, pattern recognition in time and space in data streams, very popular these days, causal relations in big data, approximate computing. So this is really taking the biologically derived systems and applying them to general data processing, and I will show you one example. As I said, the first thing is that you look at cells, at individual cells. This is our neuron model, the adaptive exponential integrate-and-fire, and this is still an analytical study, not a measurement, by the group of Gerstner; Richard Naud, the first author, did his PhD on this within FACETS. What you see here is a phase-space plot, the voltage versus the second variable w, the one with the second differential equation, and what you see is that, depending on where you are in this phase space, you can make these neurons fire in various ways: you can make them fire regularly, you can make bursts, you can make adaptation, you can make chaotic behavior. That's what you can do in theory. In the real hardware you can do the same thing. This is a bachelor thesis, actually; many of these measurements are done by students, which is very nice, because it's very easy to use this system. So this is typically what you can produce: the same firing patterns. The next thing is to try to build little networks. This is a so-called synfire chain; Aertsen's group and many others have worked on that. You have excitatory populations and you have inhibitory populations.
You pass the excitation on to the next excitatory population, and you somehow have an inhibitory part which stops the firing after a while. And then what you see is this: you send in activity, this axis is the neuron number, this is time, and the activity just travels through the network. This is a single neuron, which fires, goes through the refractory period, fires again and so on; it's just a regular firing pattern, not very exciting, not at all, but it's a typical debugging tool. You just put it on your network and you see how it travels through the chip, and that's very entertaining. Yes, a question: right now, in terms of your network, can you connect any of your artificial neurons to any other one? Yes, absolutely, you can do that, provided that you still have the resources. If you decide, for example, and I can give you some examples, if you decide that you have networks with lots of very long-range connections, then at some point you run out of connections, because you have to route them across the wafer, and once they are gone, they are gone; then you cannot go on. If you have only short-range connections, then you can use the resources very efficiently. But as long as you have resources for routing, it's like an FPGA: you can connect whatever you want to connect. Another question: if one cell is connected to many other cells, and the cells receive many inputs, does the chip process all the signals at the same time, or sequentially? No, there is no sequentiality here. This is an analog model; the system doesn't know what sequential means. It's really like your own brain: time is a real variable. In order to be sequential you need a sort of artificial time which is imposed by a clock, and there is nothing like that. Time is really like the time running in your brain; time is running in that piece of silicon, and everything happens with the same degree of parallelism as in the real biological system. That's the idea of a physical model: there is no sequential processing. So what exactly is the signal that runs from one piece to the other? Is it analog?
That is a very interesting and important question. The actual spike that runs on the wafer is a binary signal; it's actually a very boring digital signal, LVDS standard, with a standard pulse height and a very short width, and this is what the neurons emit. Now, in order for a neuron to integrate, it needs the right post-synaptic time structure. So what we have is a so-called synapse driver: a circuit that takes this delta function and transforms it into a post-synaptic potential, which has this characteristic rise time and characteristic fall time, which you can actually adjust as a user, you can make it long or short. So it's really a physical model. I have to push on a little bit now. I have to show this one here in Stockholm: this is a cortical layer 2/3 model, and this is now really based on biological data. We know that there are cortical columns, of course, which were demonstrated in this paper, and there is a model which has been developed by the group of Anders Lansner and many of his coworkers, which is an attractor memory network. He has these kinds of minicolumns here, many of which form hypercolumns, and it's basically a winner-take-all idea. These are the minicolumns, 1, 2, 3, and they represent certain hypotheses. This one is labeled number 3, and this one is also labeled number 3, and there are many other number 3s, and all the number 3s support each other through excitatory connections; these are excitatory connections. But number 3 hates all the others, like number 1, and it expresses that by inhibiting them; these are inhibitory links here, so it tries to suppress the others. Then there are inhibitory basket cells here, and maybe also some external excitation. If you leave the system alone and they all have the same rate, the system would never decide which hypothesis, 1, 2 or 3, should really fire. But if you have a little noise in the system, and this is the software simulation, these are the neurons here in a scatterplot, you see the three hypotheses here, 1, 2 and 3, and the activity always jumps from one to the other. This is called an attractor, and the system always sits in only one of the states. We can do the same with our hardware. You see it's a bit more noisy, but you see this typical attractor memory behavior, winner-take-all behavior, which we also see in our hardware. That's also still a relatively simple model. The next thing has only been published last week, so I'm proud that I can show it now. This is a PNAS paper, and it's actually a paper that has not been written by us; it has been written by our users. We provide access to the system through software support, which I didn't describe here, so people can do experiments remotely. This is Michael Schmuker; he sits in Berlin, 800 kilometers away from our system, and he runs experiments. What he does is this: he has a model with receptor neurons, and then these projection neurons, which have some lateral inhibition through these excitatory and inhibitory populations, and this circle here describes what in the insect is called a glomerulus; these are the glomeruli, which code certain categories of odor. So what is being done here is a decorrelation, which is then projected to this association layer, which also has excitatory and inhibitory populations. And what you do, in order to associate the categories of odor with certain plants, for example, is train these kinds of connections here (the laser pointer is running out of power), so you train these kinds of connections.
This is a learning process, a supervised learning process. So you take this kind of architecture, you verify that it really exists in biology, and then you have to think about what the receptors are. What these people do is use virtual receptors, called VRs, and as a test data set they actually use flowers, but they don't use the odor, they use the shape of the leaves and the color. There is a standard data set, the so-called iris data set, which is used to study neural networks. What they do is a so-called principal component analysis; here it is for those plants, and the three different types of iris are represented by the three colors, green, brown and violet. You see they are kind of well separated, but of course if you use only one of the components there is a high degree of overlap. So what they did in these network experiments is place receptors, those are the black crosses there, and those receptors then did the coding of the response: if there is a strong response for a certain input, that particular receptor gives a high rate. If you put that into the network, it behaves like this. This is a network of 200 neurons or something like that, with time running along this axis; this is biological time, by the way, not the accelerated time. You see the receptor neurons at the bottom, which fire, and somewhere in this pattern the information about the different flowers being trained is hidden. On top you see the output of the associative neurons. Before training, you look at the rates here and you see there is no distinction between the different hypotheses, of which there are three in this case. And here, after the training, you see that there are two peaks: there are two types of neurons which fire with a higher rate, and the others are switched off. Why are there two peaks? Because there are two populations in the association layer, the excitatory one and the inhibitory one, and they both fire with basically the same rate. So it means you can use this system to do classification of data. Now, that is of course kind of boring, because people have been doing that for the last 30 years; this is in a way a classical neural network approach, and it's not surprising. But it's done with spiking neurons here, which is nice and really quite new.
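To make the receptor coding just described a bit more concrete, here is a small sketch of the idea: virtual receptors sit at points in feature space, and each one fires at a rate that falls off with the distance between its preferred point and the presented sample. The Gaussian tuning, its width and the peak rate are my own illustrative assumptions, not the values of the published model:

```python
import numpy as np

# Illustrative sketch of "virtual receptor" rate coding: receptors are
# points in feature space, and each one fires at a rate that falls off
# with the distance to the presented sample (Gaussian tuning assumed).
rng = np.random.default_rng(0)

n_receptors, n_features = 10, 4          # e.g. the 4 iris features
receptors = rng.uniform(0, 1, size=(n_receptors, n_features))
sigma     = 0.3                           # tuning width (assumed)
r_max     = 60.0                          # peak firing rate [Hz] (assumed)

def receptor_rates(sample):
    """Map one normalised feature vector to receptor firing rates [Hz]."""
    d2 = ((receptors - sample) ** 2).sum(axis=1)
    return r_max * np.exp(-d2 / (2 * sigma ** 2))

sample = rng.uniform(0, 1, size=n_features)   # stands in for one iris flower
print(np.round(receptor_rates(sample), 1))
# These rates would then drive Poisson spike sources feeding the
# projection and association layers of the network.
```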
So what would be cool in neuromorphic computing? What are the holy grails of neuromorphic computing? For example, we want to use faulty and diverse devices, because this is analog electronics: like in our brain, the devices are kind of unreliable, and all neurons are different. We want to go to real-world applications outside biology. We want to use learning and plasticity. We want to be energy efficient, and we want to be fast. Let's go through this very briefly, starting with energy: is this energy efficient? Indeed it is. This is our brain; we said a synaptic transmission costs 10 to the minus 14 joules, 10 femtojoules. In the very detailed simulations of Markram's model, on Markram's computers, it's 1 joule, so they are 14 orders of magnitude apart. If you have simple models running on supercomputers, in Jülich for example, it's about 10 to the minus 4, 0.1 millijoules, which is still 10 orders of magnitude from biology. Systems like the one we have here sit at 10 to the minus 10, and the SpiNNaker system, which is a digital system, sits at 10 to the minus 8. So these systems are already really efficient, closer to biology by many, many orders of magnitude. Speed: are they fast? Well, yes, they are indeed. In nature there are many different time scales. For example, STDP, the detection of causality, you all know that, acts on milliseconds; learning is days; development is years; evolution is millennia. If you run that as a simulation that is a factor of a thousand slower than biology, development will take 100 years, and plasticity or learning many, many days, like months or so. It really starts to be impractical; I said that before. With a compressed-time system like the chips I have shown, you can do detection of causality on the 10-nanosecond level, which is not difficult in analog electronics, and what you gain is that you can compress a biological day, for example, to 10 seconds. That is probably the only way to really study development over time. Noise and diversity of components: there is a famous group, the International Technology Roadmap for Semiconductors, and they always make the point that devices in electronics become less and less reliable, and they always say that neuromorphic architectures should be able to deal with such uncertainties. Are we doing this? If you look at this paper which I am referring to, this is the firing rate of neurons as a function of the input rate; it's roughly a linear relation, but if you look at different neurons in a population there is a huge variation, like 20% or so, from circuit to circuit; that's analog electronics. There are also temporal fluctuations: if you just repeat an experiment again and again, trial to trial, there is also variation. So this is temporal noise. This is indeed a very unreliable system, but due to the population coding, which we do in the projection neurons, in the decorrelation neurons, in the association neurons, this population code can actually deal with the large fluctuations. So this has also been demonstrated. Another thing is that people have now transferred this concept to problems outside biology, for example character recognition, and it works very well. So I'm almost through, I think. Very shortly, a short comment on plasticity. This is an owl, a barn owl, and it has big eyes, as you can see; but at this point I'm referring to the ears, which you don't see. The ears are important for the owl, because it can locate things in the dark. If I close my eyes and somebody talks to me, I'm not so good at locating where you are sitting, but the owl can do this extremely well, and it's because it can resolve time differences by measuring phase differences. How does it do that? Well, if this is a mouse, and these are the ears of the owl, then the path for the sound is short on this side and long on this side, whereas in this other case it's the same for both the left and right ears. And the concept biology has discovered is that this kind of difference in propagation time is compensated in the brain: the short path here is compensated by a long path there, and vice versa, so that the total runtime until this coincidence is detected is the same. And this is an ideal case for STDP, for coincidence detection. So we have done these kinds of experiments. I have no time to go into details here, but this is again a bachelor thesis from our institute, and there is a paper which came out of it. It was actually about three months of work; it's really fast to do this experiment. So these are synaptic weights; you see the 16 discrete weight steps here.
We start with the weights in those networks at value seven, which is sort of in the middle of the range, and then you start the learning process, the STDP process, the coincidence detection that compensates for the propagation difference. And you see that the synapses develop into two populations, which correspond to the phase shifts of minus T/2 and plus T/2. And the timing precision this system actually achieves is pretty good. Why is that? The biological owl can detect a bit better than a millisecond, like a hundred microseconds or so. This is an accelerated system; that means the system, accelerated by a factor of 10,000 or something like that, has a precision of 10 nanoseconds. So it's a timing detector which is good to 10 nanoseconds, built from very imprecise synapses, because those synapses also have these 20% variations, but the learning process makes sure that these things are compensated. Very shortly on the Human Brain Project; I will flip through a few slides. We are going to build large-scale systems now, because we have plans to really reach biologically relevant network sizes. What we want to do is build, in our own lab in Heidelberg, a system of 20 wafers, which would have four million neurons and a billion synapses. We use a computer cluster here, and this is sort of a mixed system: conventional computing and neuromorphic computing. The idea here is to do what we call closed-loop experiments. There is neural processing here, and of course we want a perception-action feedback loop. So we have to produce data somewhere, like visual data, for example, which are stored on the computer, processed by the wafers, and then fed back to the simulated environment, so that we can really study learning processes. I have to show you this picture. This was taken actually this week, I think, or no, at the end of last week. We constructed a special building here, and in this blue box there will be our 20-wafer system; then we are going to scale this up. Let me point to the air conditioning system here. We are always saying low power, but at the moment we still need 50 kilowatts of air conditioning, and that has various reasons. There are conventional computers in there, and also it's an accelerated system: you may only pay 10 to the minus 10 joules per synaptic transmission, but because of the large numbers and the acceleration factor of 10,000, there are a lot of synaptic transmissions, and it costs a lot of power; not energy, but power, energy per time. So this is under construction now. It's supposed to be ready at the end of 2015; we have more or less two years to finish it. What will be the workflow in HBP concerning neuromorphic computing? Are our supercomputers useless, then? No, absolutely not. You here know very well where it all starts; I try to make this point. This is what you do: you provide the data, integrated data. Yes? A question: when you have multiple wafers and you connect them, can you still guarantee connectivity between them, can networks span different wafers or not? Yes. There are various options, and one, of course, is to connect them into one single network; we can go off-wafer and make networks that go across wafers. The other option is that we share the wafers, that we have 20 users, for example, because we will have remote access, and then people do smaller studies on individual wafers. But you can also share; that's possible. I can give you some more explanations in the discussion. So it is all based on data, on integrated data.
Then, in the HBP workflow, we do circuit building, simulation and visualization. These are simulations on supercomputers, really very, very detailed simulations with multi-compartment cell models with individual ion channels. This is what the EPFL group does. They expose that to a robotic environment. And then we do an important step: we reduce complexity, going to simpler models. And then we export those models to our system, or to the Manchester system, which I didn't describe here. And then we exploit the configurability and also the speed, and we search the parameter space to really come to solutions that are interesting. And then, of course, the idea is that we get to all these dreams, which are down here.

So this was my last slide. The conclusions: the most important thing, I think, is that it is a consistent concept for a non-von-Neumann, non-Turing computing architecture. There are some important features. Universality, I forgot to point that out: you can really implement very, very different circuits; it's a configurable system. It's hopefully scalable. Fault tolerance I didn't demonstrate, but we did that already. It's power efficient. It can learn. And it has high speed. And I have to thank a lot of people here, because this is a huge development, which of course did not start with HBP; HBP has only started now. In the past, it has been this consortium here, which was incredible. This is BrainScaleS, and it was preceded by FACETS. These are about 100 people; this is from one of our meetings, almost two years ago now. And those people have been amazing, because we were always told it's impossible to bring together biologists and modelers and mathematicians and physicists, and here it really worked. We really demonstrated, for example, that we can come up with our own models, implement them in hardware and make them work. So it still is an incredible group. It was all financed by this activity here, the FET programme, the Future and Emerging Technologies initiative in Brussels. This is our group in Heidelberg. And all these very serious people with dark suits here, these are the sub-project leaders in HBP who will drive the future. Thank you very much.

I wonder what is the good way of thinking about variability and difference in the brain. Because you brought up the example of the rise time of excitatory post-synaptic potentials in different neurons, and obviously it's different. Now, if you go to the same neuron and stimulate it with a visual input, you will get different responses over time, this trial-to-trial variability. Then if you go to the morphology of individual neurons, take just one class of neurons, say pyramidal cells, you will find that they are all slightly different from each other. Then if you go to the axons, you will find that the axonal conduction is also very different from cell to cell. So I am puzzled by this. It looks as if it would have been probably easy for a biological system to just duplicate genes so that it makes cells identical to each other, as you do for the chips in your computers. But it hasn't been done there, and actually it looks as if the difference has increased with evolution. So I'm really puzzled.

Well, I guess we're all puzzled. But there's one thing I do not agree with: I don't think it would have been easier for biology to make all cells equal. I think it's rather natural that they are different, from the way they grow.
And also in electronics it's natural that they are different, from the way they are produced. I mean, it's the lithographic process, it's the way the wafers are processed; all of this introduces variability. The amazing thing is that biology has learned to maybe tolerate it, but probably even exploit it. And of course there are amazing papers out there, which we read all the time, because to me that is the most exciting research topic for the future. You probably know the papers of Eve Marder. She's a neurobiologist in the U.S., and she studies the lobster. It's a very simple system of three neurons, which somehow controls the gastric system; I'm not a biologist, but it's somehow how the things in the intestines of the lobster actually move. So it's a relatively simple system. And what she did is she has a model for those neurons, and she produced, I think, 20 million simulations with different parameters. 20 million simulations, really amazing. And what she found is that many, I think about four million or so, of those gave the same results for the behavior of that system, although the parameters are dramatically different. That means there are always many, many solutions to a problem with really different parameters. And it's that kind of process that we have to emulate in our system. So it's not that you need a very special variation of your neuron; there are always many different solutions that use neurons with variability to solve a certain problem. And that is something one has to study. You can study it by simulations like Eve Marder did, but of course it's very difficult: if you always have to check 20 million simulations, even for a three-neuron system that is already a big effort. If you do that for a 1000-neuron system or a million-neuron system, it starts to be intractable. So that's also where I hope that with our hardware, because of the speed and the high reconfigurability, we can try out many different combinations and find out how these variations between the neurons are being exploited. That's among our tasks for the future. And of course it's a result of evolution; what you say is absolutely true. With a system like we have now, we could even try to emulate evolutionary processes, because you can go through many different variations of parameters. So this is what we plan, but I have clearly no answer to your question. Definitely not.

A question related to this: how much variability is there between neurons?

In nature, and in our case, about 20% is a good number. Typically, for example, for the conductances, the reversal potentials, the time constants. I can show you many plots; in some parameters it's more, in some it's less, but 20 to 30 percent is a good number. And so far we have used an approach which was very much driven by simulations, because in simulations people normally have no variations: you have a neuron model, they're all the same. So our approach initially was to make our neurons as close as possible to the simulated neurons. You see, we have all this parameter storage. So what we can do, and it's a tedious process, is measure parameters like the synaptic strengths, the time constants, the reversal potentials, see how they deviate from the mean, and then calculate a calibration constant. We can store the calibration constant in our system.
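A minimal sketch of what such a per-neuron calibration step amounts to; the parameter chosen, the spread, and the residual trim error are made-up illustration values, not measured hardware figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: each hardware neuron realises its membrane time
# constant with roughly 20% fabrication spread around the target value.
TARGET_TAU_M = 10e-3        # target tau_m in seconds (biological scale)
n_neurons = 512

realised = TARGET_TAU_M * (1.0 + 0.20 * rng.standard_normal(n_neurons))

# Calibration: measure each neuron once, store a per-neuron correction factor,
# and apply it when the parameter is written into the on-chip storage.
# The correction itself is not perfect, so a small residual error remains.
correction = TARGET_TAU_M / realised
residual_error = 1.0 + 0.02 * rng.standard_normal(n_neurons)
calibrated = realised * correction * residual_error

def spread(x):
    return 100.0 * x.std() / x.mean()

print(f"spread before calibration: {spread(realised):4.1f} %")
print(f"spread after calibration : {spread(calibrated):4.1f} %")
```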
And we can make all neurons the same to some extent, with a variation of only a few percent or so. That is what we call calibration. And if you do that, you end up with a network that behaves like the computer simulation. In the past years this was always seen as the measure of success: people said, if you agree with computer simulations, with NEST simulations or NEURON, then you have made a system that is useful, because it behaves like the computer, it's just a little bit faster. I tend to, well, it took me a long time, but I tend to disagree with that, because we should not make our systems like the computers; we should learn to exploit the variability. And as soon as we have theories or ideas for that, we are ready to try them. I would be more than happy to give up this calibration procedure, because it's long and tedious and complicated and it takes forever. And if we ever want to build a system with 10 to the 15 synapses, you cannot calibrate 10 to the 15 synapses anyway. At that moment, at the latest, you have to learn how to use what you get.

Maybe I missed that in your presentation, but in your wafer implementation, do you treat long-range connectivity and short-range connectivity on a different basis?

No. First of all, I didn't really explain that here. There is this wafer, and as I said, there are these switches which we can set, so you can connect any neuron on the wafer to any other neuron on the wafer. Now, from the point of view of resource usage, it is not very wise to connect a neuron at the bottom of the wafer with another neuron at the top, because you use a lot of the routing resources to reach that other neuron. So the wise solution is to use the on-wafer routing for the short-distance connections. And in biology, systems are often totally dominated by short-distance connections, and long-distance connections are rare. What we can also do, we have another layer of routing: you can go off-wafer. There are sort of 3D connectors, so you can have a spike which leaves the wafer and is then routed by those FPGAs which sit on top. It can be routed back over a large distance to the same wafer, or it can be routed to the next wafer; this is how you go across wafers. So that's possible.

Is it crucial for your simulation? Yes, it's crucial. And you also see that one claim which we always make, and I also make, is that there is no software in the system. Of course that's a huge lie. There is no software to run the system, but there is an enormous amount of software to configure it. And the most important thing you have to do at the beginning is: you get a network from theory or from biology, from databases or whatever, which has a certain connectivity, and then you have to decide where you put those neurons, in an intelligent way, so that you make good use of the routing resources. The problem, of course, is that biological systems are three-dimensional and the wafer is basically two-dimensional, and that is a big problem. So the routing and the mapping of the network onto this artificial system is a huge task. We have worked on that very hard; many PhD students worked on optimizing the routing and mapping algorithms, and it now works. But it's clear that it's much easier to implement networks that are dominated by short-distance connections and have rare long-distance connections than, for example, a randomly connected network.
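As a toy illustration of why connection locality matters for the mapping, here is a small sketch, purely illustrative and with made-up numbers, comparing the total wiring demand of a distance-dependent network with that of a uniformly random one on a two-dimensional sheet standing in for the wafer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Place N "neurons" on a 2D sheet, standing in for positions on the wafer.
N = 500
pos = rng.uniform(0.0, 1.0, size=(N, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)   # exclude self-connections

K = 20  # outgoing connections per neuron, an arbitrary toy number

def total_wire_length(connect_prob):
    """Draw K targets per neuron according to connect_prob and sum the distances."""
    total = 0.0
    for i in range(N):
        p = connect_prob(dist[i])
        p = p / p.sum()
        targets = rng.choice(N, size=K, replace=False, p=p)
        total += dist[i, targets].sum()
    return total

local_net  = total_wire_length(lambda d: np.exp(-d / 0.05))                    # mostly short-range
random_net = total_wire_length(lambda d: np.where(np.isinf(d), 0.0, 1.0))      # distance-blind

print(f"locally connected network : {local_net:8.1f} (arbitrary units of wire)")
print(f"randomly connected network: {random_net:8.1f}")
```

The point is only that a distance-blind random network demands far more total routing than one dominated by local connections, which is exactly what makes random networks hard to map onto the wafer.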
If you have a random network where short and long distances have the same probability, that's very hard, because all these long-distance connections steal the routing resources. That's typically something you have to work on. But if, for example, you take Henry Markram's cortical column, or any column model, that's very nice in principle, because there is a lot of local connectivity within the columns and the long-distance connections are very rare. So that's very good for us. But it depends on what you want to do.

I have two questions. The first question is: as far as I understand, the neurons in your chip implement some simple computing function, like a rule that integrates the signals, right? The reason I say simple is that real neurons are conductance-based; it's a complicated combination of ion channels, and that gives real neurons a very powerful computational capacity. Is it possible that the cells on your chip can also reflect this kind of capacity, like the ion channels?

I mean, it is a conductance-based model, by the way; that we have already. But I'm not sure I understand your question. The neuron is not calculating as such; it's not carrying out a calculation, it's not working with numbers or anything like that. It's a real physical model. So what you have is currents flowing into the system, and they are being integrated according to the model which I showed you, the adaptive exponential integrate-and-fire model. You can vary that. You can vary parameters: you can change the threshold, you can change reversal potentials, you can change conductances. But you cannot fundamentally change the model on the hardware. That's probably the single most important disadvantage of our approach, because it's an analog circuit: you have to live with what you have. If somebody comes with a radically different neuron model, you would have to rebuild the system, which would take you two years or something like that. So you have to live with the parameter space that is provided, but within that configuration space you can do what you like. It's an analog model; it's not doing calculations, it's really the voltage changing in real time in the chip.

Okay. My other question is similar to what was just asked. It's about the design of the connectivity between the neurons. Is it possible to design the chip, the connectivity, in such a way that you first lay out the full connections between all cells, and then selectively shut down the ones you don't want?

Well, as I said, you have to live with the resources you have. There is a certain amount of routing resources, and if you exhaust them you cannot create new connections. But of course what you can always do is switch connections off: you can switch the weight to zero, and then the connection disappears. A very important question, which we are always being asked, and I don't know whether this is your question, is: can we grow connections, for example to emulate developmental processes? Now, of course, we cannot physically grow connections. But what you can do is start with a very sparse network where you use only a few neurons. Nobody forces you to use 200,000; you can use only 100 neurons on the wafer, and then you have them connected in some way.
And then you have a lot of free resources which you can use during operation to reroute. So that is in principle possible, not at the moment, but it will be possible, to study developmental processes, for example. Which would be important, because these are slow processes. But we are not doing this at the moment.

In your model you keep talking about the electrical properties of the neuron. What about the synapses, the chemical part of it, the effects of the diffusion of the chemicals and so on? Is that taken into account at all?

No. I mean, these are chemical synapses, of course, the ones we have in our model, but they are modelled by an electrical model. So we have the equivalent electrical model of the synapse. At this moment the only things we have are the shape of the post-synaptic potential, short-term Tsodyks-Markram-type plasticity, facilitation and depression, and STDP. That's what we have, and you can change the time constants. But it's an electrical model of a chemical process. For example, there is no stochasticity. We know that if you analyze the signal transmission across the membrane and the synaptic cleft, it is quantized. We don't have that; there is no stochasticity in our synapses. Some people say that's important, but this would not be easy to implement, I have to say.

You mention the spiking behaviour, but does your model also cover sub-threshold activity?

Yes, yes, all the sub-threshold activity is there, and you can even read it out. The important thing, of course, if you have a working network, is that you can access it. You can, for example, do recordings like you do from a real animal. You can record all spikes, because they are the means of communication in the network. You can also, from selected neurons which you select as a user, read out the sub-threshold membrane voltage. Not all at the same time, but you can switch: you can say, in this network I want to look at that neuron, please show me what the trace looks like, and then you get it on an oscilloscope or into your database. Technically this means we need ADCs, analog-to-digital converters; we have a big pile of ADCs, and you can switch them to any neuron and then do the recording. That's possible. For example, in the synfire chain there was a membrane trace I showed; that's one example. We can also talk over coffee.
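Since the Tsodyks-Markram-type short-term plasticity came up, here is a minimal sketch of one common formulation of that model. The parameter values are made up for illustration, and the hardware implements an electrical analogue of these dynamics, not this code.

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.2, tau_rec=0.5, tau_facil=0.05, A=1.0):
    """Return the relative PSC amplitude for each presynaptic spike.

    u: utilisation of synaptic resources (facilitation variable)
    x: fraction of resources available (depression variable)
    This is one common formulation; the parameter values are arbitrary.
    """
    u, x = U, 1.0
    last_t = None
    amplitudes = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            # recovery of resources and decay of facilitation between spikes
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)
            u = U + (u - U) * np.exp(-dt / tau_facil)
        u = u + U * (1.0 - u)          # facilitation step on spike arrival
        amplitudes.append(A * u * x)   # amplitude of this post-synaptic response
        x = x - u * x                  # resources consumed by this spike
        last_t = t
    return amplitudes

# A regular 20 Hz train shows depression for these parameters (amplitudes
# shrink spike by spike); larger tau_facil and smaller U give facilitation.
print(tsodyks_markram(np.arange(0.0, 0.5, 0.05)))
```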
You were saying that you now try to embrace the philosophy that you should no longer follow the computer simulation, but instead exploit the properties of your system. I wonder, has there been any effort to find applications for this outside the realm of biology, to actually include it in robotics or something like that, to use it as an image processing system in an actual robot?

Well, I would call image processing and use in robots still the realm of biology, because the reason we have brains is to control our arms and to do vision and functions like that, so I would call that the classical application of neuromorphic computing. One thing I should say is that since this is an accelerated system, it has a lot of advantages, but it has one major disadvantage: it cannot be connected to normal robots, because their time constants, the mechanical time constants, are like our own time constants, and so they would move much too slowly for our system. They are not matched. So if we do robotics or pattern recognition or things like that, it has to be done with simulated data. This is why we have this big pile of computers next to our system, where we simulate artificial robots and artificial sensors and artificial actuators; it's really simulated data. By the way, this is also true for the supercomputer simulations in HBP, for a different reason: the supercomputer simulations are too slow to interact with real robots, ours are too fast. There is the SpiNNaker system, which I didn't talk about here; it runs at biological real time, and it would be able to work with real little robots which you control. Outside the realm of biology, as I said, is the most exciting thing to me, but those would be radically different things, in particular data analysis and the question of making predictions. The most important thing our brain does is that it makes predictions, and it makes predictions based on learned experience. If I take this thing here and let it drop, I can predict when it hits the table, although I have never done this experiment here before, because the physics is in my brain: I learned it from many other experiments. To do that on more abstract data would be one of the big dreams for the longer term, but we are definitely not there at the moment.

I'm just wondering, you never mentioned the word memory in this talk somehow.

Yes, memory is one of those words which is a problem in projects like this, because it depends on whom you talk to. For me, when I talk about memory, I talk about the technical memory first of all: there are SRAM cells, which is computer-type memory, and there are the analog floating gates, that's memory. That is of course not the memory the neuroscientists are talking about, so one has to be very careful. I'm a physicist; for me it's all the technical stuff which is the memory. Of course I know very well that memory is a big question in neuroscience, how information is being stored in our brain, and it would be nice to do experiments that demonstrate how that works. There are also some ideas, but I'm really the wrong person to discuss this here, although I can give you some
ideas. One thing, for example: I'm convinced that short-term memory is a relatively easy thing, in a way. When we walk through a room, there is a huge amount of information which we record from our sensors, from our eyes, from our proprioceptors, from smell and whatever, and that's information which is extremely important for us in order to navigate; we cannot navigate without this huge amount of information. But it's also information that we do not want to remember. It has to go away, because otherwise we would be overwhelmed with information. So most of the information that we record has to live in our brain for maybe 100 to 200 milliseconds, and then we want it to go away. And how can that be? It will probably not be a chemical process; it will just be the activity of the network that stores this information. If you look at models like the ones presented by Wolfgang Maass on liquid computing, where you store information in the network activity, you can measure the time the information stays in the network before it fades away. To me that is a very nice model of short-term memory, and, I didn't show you, we have even demonstrated on our chip that it works. You just need a randomly connected network on the chip, then you send in a spike train with a certain pattern and you measure the time it takes until that information disappears, and what you typically find is a couple of hundred milliseconds, 150 to 200 milliseconds on the biological scale; in our case it's always scaled down. So to me that's a very interesting thing, and it's also something one can study here. I'm not claiming it's understood, but to me that's an attractive model for short-term memory. On long-term memory, I don't know; that is clearly something which has to do with the more permanent changes of the synapses and the connectivity, and it must be chemistry to some extent.

You mentioned the model is based on the adaptive exponential integrate-and-fire neuron, which basically depends on its parameters. How many parameters do you store for this? Is that per neuron?

For each neuron, 21. But within those 21 parameters there are also electronic parameters, like bias voltages for circuits and things like that, so there's a lot you can change. I have another set of slides, and I can point you to a publication if you're interested in the many of these parameters. Of course there are the typical neuroscience-type parameters like time constants, reversal potentials, conductances, but there are also pure electronic parameters to set up the neuron. The threshold, for example: the threshold is not a biological quantity. A biological neuron doesn't have a threshold; it just behaves like this, and it's nicely described by Hodgkin-Huxley. In our case, because this is effectively an integrate-and-fire model, we need artificial values which we have to put in for the threshold, for the refractory period and things like that. We have to set all this by hand.

So how much data do you need for a neuron?
Well, as I said, it's 21 parameters per neuron. For the entire wafer, the full configuration file, for 200,000 neurons and 50 million synapses, is 40 megabytes. You can easily calculate that this is totally dominated by the synaptic weights, because there are 50 million of them; it's something like 85% or so synaptic connections, 10% neurons, and the rest is some setup parameters. There is a paper where all this is very nicely summarized; I can point you to it if you want.

You mentioned that the system can be used remotely. How is access to the system organised, who can access it?

Well, at some point, we hope, everybody showing interest. At the moment it still requires some kind of personal agreement, because we have no really safe user administration system. For example, we have many users, and any user can kill any experiment of any other user, so we have to somehow agree; we have to do phone calls and send emails to agree on how to use the system. Of course, in the longer term, in fact already in HBP, this should be professionalized, so that you can be a registered user, you can reserve a certain fraction of the system, and then it's yours for, I don't know, a day or so, and you can run your experiments. All this user management is part of the platform concept which we have in HBP, but it's not set up at this moment; at the moment it's still done by mutual agreement. And our only real external user so far was Michael Schmucker, from Berlin; it worked extremely well with him, I have to say.
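A rough back-of-the-envelope check of that 40 megabyte figure; the bytes-per-parameter and bits-per-synapse used here are assumptions for illustration, not the actual on-wafer data format.

```python
# Rough sanity check of the configuration-file size quoted above.
# The per-entry sizes are illustrative assumptions only.

neurons = 200_000
synapses = 50_000_000
params_per_neuron = 21

neuron_bytes = neurons * params_per_neuron * 2    # assume ~2 bytes per neuron parameter
synapse_bytes = synapses * 6 // 8                 # assume ~6 bits per synapse (weight + address)

total_mb = (neuron_bytes + synapse_bytes) / 1e6
print(f"neurons : {neuron_bytes / 1e6:5.1f} MB")
print(f"synapses: {synapse_bytes / 1e6:5.1f} MB")
print(f"total   : {total_mb:5.1f} MB  (order of the quoted 40 MB, dominated by the synapses)")
```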