Welcome to the next session here on Neuromorphic Engineering. My name is Shih-Chii Liu. I come from the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. We're a big institute with about 60 to 70 people, with different groups working on different areas of neuroscience. There's a group that works on anatomy and physiology in cats and mice. Then there's a group that does physiology and anatomy on zebra finches, little birds that learn how to sing from their fathers. We have a group that does modeling on the data they get from the experiments, and a group that does theoretical studies about computation in general, computation that's inspired by the nervous system. And then there's a group of us that do Neuromorphic Engineering. So basically we build electronic devices that are based on the structure of different parts of the brain and also their function. And so I'm going to talk to you today about the part that's about Neuromorphic Engineering, which is probably a term you haven't heard before. Here we go the opposite way: we look at the data that you collect, we look at the models that people come up with, and then we try to construct devices that capture something about how the brain works. The main thing that I want to say about this particular field is that Neuromorphic Engineering consists of embodying organizing principles of neural computation in electronics. So we don't aim to imitate exactly every little cell that you have in the brain, but we try to look for the organizing principles of how computation is carried out in the brain: what is it that enables us to see very well, to understand the world based on all the different sensory information that we get, and also how we act on the world. And so the talk is going to be composed of five parts. The first part is about the motivation and history. The second part will be modeling the neuron in silicon.
So here I will try to explain a little bit what this technology is that we use to construct the electronic circuits that imitate, for example, the neuron. In part three we're going to talk a bit about how this particular retina that I have in front of me, the dynamic vision sensor retina, works. Part four is about audition and the silicon cochlea. So I have another board over here, so there are going to be demos. And then part five, if we get to it, is how we're moving ahead and looking at different types of networks for sensor fusion. In this case, it's a deep network, which is one of the hot topics of research in machine learning. Okay, so what is amazing about natural systems, which most of us already understand, is that if you look at, for example, a bee that's flying around between different flowers looking for food, the bee exhibits a bunch of behaviors that are really amazing. It flies aerobatically. It recognizes patterns, so it knows where the flowers are that have food. It navigates from its home to where the food is. It forages for food and then it communicates the information about where the food source is back to the nest. And it does all this for very little power and also very little mass. A bee weighs maybe about a gram and burns maybe about a milliwatt, right? If you compare that with the computers that you have, this thing probably burns over a kilowatt. And yet, with the amount of power that we have here, we cannot build models that capture the behaviors of the bee. And so this is why we're inspired by the technology that you have in the bee, and we're trying to look for new kinds of computing devices that can capture this behavior. Overall, if you look at the amount of power that the bee is using, it's about a million times more efficient than digital silicon.
And so this is what we aim to build, using the same kind of silicon technology that you use in your computers. The reason why computing has gotten so amazing over the last 10 years is because the silicon fabrication industry has figured out how to shrink transistors down. So this is the first transistor, which was built at Bell Labs in 1947. And nowadays, if you look at what we call a wafer, which is about 10 centimeters across, you can get 10^9 transistors. And this is back in 1997, so today it's probably two orders of magnitude more than what you see here. Okay, and there's a particular law that the industry follows. This is called Moore's law. And Moore's law says that the number of transistors per chip will double every one and a half to two years. This is still holding true, even though there is a saturation to the curve nowadays, and that's why you go to multi-core systems. Also, the cost per bit of memory is dropping about 30% per year. So that means it's cheaper to store memory, and you can have a lot of information stored nowadays, which is why people can now store the videos and the pictures that you take from your cameras and retrieve them so easily. This trend has held for about 45 years and will probably continue for at least another 10 years. The silicon companies are still looking for ways to drop the size of the transistor down further and further, though they will start hitting physical limits, and so they'll look for different kinds of materials that may take over from silicon one day. Now, the whole thing with the silicon industry right now is that they build digital chips that go into digital systems. That means all your signal representations are in terms of ones and zeros, and usually there's a clock running. So if you take for example the processors, this is a simplified picture of what goes on.
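The doubling in Moore's law is easy to put into a quick back-of-the-envelope calculation. This is just an illustrative Python sketch: the starting figure of 10^9 transistors around 1997 is the rough number from the talk, not an exact historical value.

```python
# Moore's law sketch: transistor count doubles every ~1.5 to 2 years.
# The 1997 starting count of 1e9 is the illustrative figure from the talk.
def transistors(start_count, start_year, year, doubling_period_years=2.0):
    """Project a transistor count forward, assuming exponential doubling."""
    periods = (year - start_year) / doubling_period_years
    return start_count * 2 ** periods

# Seven doubling periods in 14 years: 2^7 = 128x, i.e. about two orders
# of magnitude more than the 1997 figure.
print(f"{transistors(1e9, 1997, 2011):.1e}")
```

With a two-year doubling period, 14 years gives seven doublings, which is where the "two orders of magnitude more" in the talk comes from.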
You have a clock that drives when information is transferred at any time step. So usually you split up your computation so that you have one part that does all the processing, if you like. It's composed of combinational logic circuits or logic gates, and it consists of NANDs, NORs, and inverters. And then you're going to have another piece that we call the registers, and it contains the memory. So what you do then is, every time there's a clock tick, you're going to fetch some information from your register. It goes up to the block that is the combinational logic, which cranks on the data, gets an output, and puts it back in here. The clock comes along, and it does the same thing again. And so this is how the processing works in your computer. And what about the analog part of your world? The sounds that come in, the things that you see, those are analog. They're not digital. So what does industry do with that? It has an analog-to-digital converter for converting the analog signals that come in into a digital signal. Immediately it goes right into the digital world, into a whole big piece of logic. And then if it has to send out analog information again, for example if you want to play out a sound again, then it goes back through a digital-to-analog converter. So the analog parts of the processing are extremely small compared to the digital part. But this is not true of our brain. So now if I do a comparison between the architecture of the computer and the architecture of the brain, you see many differences. The first one is that the computer always uses a fast global clock. Nowadays it's gigahertz. And the brain uses self-timed, data-driven computation. So basically it doesn't compute if it doesn't have to. If there's no information coming in, then it doesn't spend more power than it needs. But with a clock, every time a clock tick comes, you're going to do something. The second thing is that the computer has a bit-perfect deterministic logical state. So it's always going back to ones and zeros.
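The fetch-crank-write-back loop just described can be sketched in a few lines. This is a toy Python model, with a simple 8-bit incrementer standing in for the combinational logic; the function names are purely illustrative.

```python
# Toy model of the clocked register/combinational-logic loop: on each clock
# tick, data is fetched from the register, pushed through the combinational
# block, and the result is written back.
def combinational_logic(x):
    """Stand-in for the logic gates (NANDs, NORs, inverters): an 8-bit incrementer."""
    return (x + 1) & 0xFF

register = 0
for _ in range(3):                       # three clock ticks
    data = register                      # fetch from the register
    result = combinational_logic(data)   # the logic cranks on the data
    register = result                    # write back on the tick
print(register)  # -> 3 after three ticks
```

The point of the sketch is that nothing happens between ticks: the clock alone decides when data moves, which is exactly the contrast with the data-driven style described next.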
But the way the nervous system works is that you have neurons and synapses, as we've heard. The synapses, first of all, take the spikes from the neurons, and those you can consider kind of digital. Then they transfer that into a charge, and that we consider analog. And then when the neuron has built up to the threshold, it puts out another spike again, so it goes digital. So it always does this dance between analog and digital. In addition, synapses are stochastic, so they're not deterministic like what you have in the computer. The third difference is that in the computer the memory is distant from the computation, as I've just talked about, whereas in the brain the synaptic memory is local at the computational unit, which is the neuron. And then finally, in the computer you have a system that's fast and high resolution, has a constant sample rate, and has analog-to-digital converters for transmitting analog information into the digital world. Neurons, on the other hand, we consider to be low-resolution, adaptive, data-driven quantizers. They're very low resolution. Usually we consider it's only about four bits, versus the 64 bits that you get on the computer. And also they adapt to the inputs coming in. We'll talk a bit about that later. And how is it that the computer can run so much faster than the brain? Well, it's because of the mobility of electrons in silicon, which is about 10^7 times that of ions in solution. And that's why you can run your processors much faster than the processors that you have in the nervous system. And so there are different neuromorphic systems that have been built so far. This is a field that started about 20 years ago. In fact, we just celebrated the 20th anniversary of the Telluride Neuromorphic Workshop, which began about a year or two after the field started in Carver Mead's lab at Caltech. And so nowadays you can get neuromorphic sensors, which I'm going to show two examples of. And you can get smart sensors.
So basically this includes, for example, a photoreceptor front-end, and then it computes motion. So these would be things like motion chips, tracking chips, and auditory classification and localization chips. We also have chips that imitate central pattern generators, which are necessary for creating the rhythmic behavior needed for locomotion. Different groups have built models of specific systems, like the bat sonar echolocation system and so on. And then of course there's a lot of focus nowadays on multi-chip, large-scale systems. Basically this is trying to build chips that contain kind of the same order of neurons that you have in the brain. So you might have heard a lot about the IBM chip that just came out, the TrueNorth chip, where they're aiming for a million neurons. But there are also other groups out there aiming for that number or even beyond it. And so you can see examples of how maybe 10 years ago you would get boards that look like this, and you wouldn't even get a million neurons at the time. Nowadays you can get something like this. This is from the Stanford group, from Kwabena Boahen's group, where they're building a system called Neurogrid. And you can see the size of the board is a little bit bigger than the size of a CD. One day, probably very soon, you could buy one of these boards, and then you can run your simulations on these boards instead of on the computer. The nice thing about these boards is also that they burn much less power than the computer does. Okay, so now I'm going to go into some more detail about how the transistors are constructed in silicon. Before I do that, let me just reiterate one more time what goes on in the neuron. So again, if you look at the different pieces of the neuron, there's the dendrite, the soma, and the axon. And this is data taken from John Anderson and Kevin Martin at our institute.
And so what does the dendrite do? The dendrite does summation. It sums all the charge coming in from the different spikes that hit the dendritic tree. The dendritic tree has cables, and the cables have capacitance, and so the charge that comes in is integrated over time. The synapses do multiply-accumulate, that's our interpretation of what they do. So each spike coming in is multiplied by a particular weight, and then you accumulate the sum of the charges. The other thing the neuron has is complementary channels that push and pull on the membrane voltage at the soma. And then there's this understanding that dendrites do local analog computation. Finally, the axon is the one that communicates the digital spikes over long distances to other neurons. And so these properties are something that we can also implement in electronics. In fact, we can do it quite cheaply compared to actually writing a program that runs on your processor to get the output. And so I wanted to give you a feeling for how we do this. You've seen the Hodgkin-Huxley model, and you can see that it looks quite complicated. At first the differential equation looks simple enough, but then if you look at each of the components, you can see that it's actually quite complicated, because you have to multiply g by something that's raised to the power of three times yet another variable. And the same thing is that you have to create something equivalent for the different variables, for m in this case, for the sodium. And then you get this little electrical circuit. Now if you have to go and build this using transistors and silicon technology, it's just different, because we're not using a discrete resistor like you would buy at the electronics store. We have to construct the resistor out of transistors, and we also have to make it dependent on the voltage difference across the resistor.
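The multiply-accumulate view of the synapse and dendrite can be written down in one line. This is a minimal sketch; the weights and spike counts below are made-up illustrative values, not data from any real neuron.

```python
# Multiply-accumulate view of synaptic input: each spike is multiplied by
# its synaptic weight, and the dendritic tree accumulates the resulting
# charge. Weights and spike counts are illustrative.
weights = [1.0, -0.5, 2.0]   # synaptic weights (negative = inhibitory)
spikes = [3, 2, 1]           # number of spikes arriving at each synapse

# Dendritic summation: accumulate weight * spike count across synapses
membrane_charge = sum(w * s for w, s in zip(weights, spikes))
print(membrane_charge)  # -> 4.0
```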
So let me just give you a feeling for what these basic elements look like. Here's the symbol of a transistor. If you look at any electrical circuit, you're going to see something like this, and you can see that it's got four terminals: a source, a drain, a gate, and a bulk. Usually the bulk is not even drawn at all. And this transistor is primarily silicon, right? But you'll see later on that to get different types of transistors, you dope it with other elements. We use boron, arsenic, and phosphorus; those are the other three elements used in this technology. Now if I take any silicon wafer and do a cross-section through it, you're going to see something like this. You get two types of transistors, called PMOS and NMOS, and you can see the equivalent symbols at the top, like I showed before. The only difference is that for the PMOS, you draw a little bubble on the gate. Now look at, say, the NMOS. It's called NMOS because the source and drain are n-type, and the bulk is p-minus. This is all silicon, right? Except that we dope this part and this part differently, so that now they're n-plus, which means they have an excess of electrons, and this part is p-minus, because now we have a lack of electrons, which for us means that we have holes. So that's the positive charge and the negative charge. And for the PMOS, we flip it the other way around: now the source and drain are p-type, and the bulk is n-type. So they're just complementary to each other. And now, to get a feeling for how a transistor works, again you have the cross-section at the top. What I'm going to show here is the concentration of electrons on the source side, and here is the concentration of electrons on the drain side.
Because we put different voltages on the source and drain, you're going to see that on the source side you get more electrons; the concentration is higher than on the drain side. And then there's something underneath the gate which we call the channel. What the gate does is move this level. Oh, thank you. So it's going to move this up and down. Think of this as a barrier, if you like. The electrons down here need to get up into the channel, and they need to have enough energy to get up there. Once they're up there, they're going to move along here depending on the concentration on this side versus this side, or depending on the field that's in the channel. And then the electrons will move from here to here. On the drain side, you can see the barrier is much higher, so there are very few electrons that jump over to that side. In general, because of diffusion and drift, the electrons are going to flow this way, back out again. Which in terms of current is actually the opposite way, so the current is going to go this way. Okay, so this is just an animation so you can get a feeling for it. You can see the electrons are busily bouncing around. Once in a while they hop over here, and once they're in here, they diffuse along the channel and then fall back out here. And for the ones over here, only very rarely will one of them hop into the channel. So that's how the transistor works. And what you can do is change the voltage on the gate, and what happens then is that you move this barrier up and down. So if I increase the voltage on the gate, then this barrier is lowered, because this thing moves down. This is a way of visualizing what's going to happen when you change the voltages on the different terminals. Okay, so as I said before, we have two types of transistors: the p-type and the n-type.
And let's go with the n-type, because the n-type is usually much easier to understand. So again we have the source, the drain, and the gate; usually we don't draw the bulk. The source is usually tied to ground for an n-type transistor. And the n-type transistor conducts electrons from the negative supply, so we consider the negative supply to be the source of electrons. As you saw in the animation before, the electrons are going to hop into the channel and then go across. So the electrons are going this way, but because we usually think in terms of current, the current goes the other way. The transistor is turned on by putting positive charge on the gate, which produces a positive voltage between the gate and the source. And so you can think of the n-type transistor as acting as a current sink: current is always trying to go down towards the negative supply. Now, the p-type plays the opposite role. P-type transistors conduct holes from the positive supply; remember, holes are just the lack of electrons. So now the positive supply voltage is going to supply holes at the source. The same thing happens: they come up over the barrier into the channel and then fall out on the other side. And because holes are positive charge, the current flows in the same direction as the holes, down this way. The way that you make a p-type transistor conduct more holes is that you put a negative voltage between the gate and the source. And so the p-type transistor acts like a current source. With these two types of devices we can build different kinds of circuits; these are the basic units that we use. And so what does a transistor have to do with what happens in biology? Well, if you look at biology, it has this Boltzmann dependence on the concentration of ions. Here is a part of the membrane of a cell. You have some concentration of ions on the inside of the cell, and you have some concentration on the outside of the cell.
And the difference in concentration between the inside and the outside of the cell sets up the membrane voltage difference across this particular membrane. And depending on the voltage difference that you have, you also determine the number of channels that are open or closed in the membrane. What that does is effectively change the conductance of the cell. There are some experiments that were done by Hodgkin and Huxley where they measured the change in the conductance. What you see here, for example, is the potassium conductance. If you increase the membrane voltage, what happens is that the conductance increases linearly on the log scale, so it's exponential, and then it saturates at some point. Now let's look at what the transistor does. Think of one terminal of the transistor, the source, as being like the inside of the membrane where the ions are, a particular kind of ion. And then think of the drain as being like the outside, the ions on the outside. What the gate does is change the energy barrier that keeps the ions, or in our case the electrons, from going to the other side. Now, if we run our transistor in a region that we call the subthreshold region (industry runs their transistors in the above-threshold region) and I measure the current as a function of the gate-to-source voltage, then what you see is the same kind of exponential dependence. So again, here's the log scale, and you see a linear dependence on the gate-to-source voltage. And then if you run it above threshold, you go into a different kind of dependence, which is the square-law dependence. So there's this similarity between these two curves, and this is what we use when we're trying to construct electronic models of a neuron. Yes. So here the current is flowing from the intracellular to the extracellular space? Yes.
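The exponential dependence of the subthreshold current on the gate-to-source voltage can be sketched numerically. The values of I0 and kappa below are illustrative stand-ins, not parameters of any particular fabrication process.

```python
import math

# Subthreshold MOS current sketch: below threshold, the drain current depends
# exponentially on the gate-to-source voltage, mirroring the Boltzmann
# dependence of the membrane conductance. I = I0 * exp(kappa * Vgs / UT).
I0 = 1e-15     # pre-exponential current (A), illustrative
kappa = 0.7    # gate coupling coefficient, typically ~0.6-0.8
UT = 0.025     # thermal voltage kT/q at room temperature (V)

def subthreshold_current(vgs):
    """Drain current of an nFET in subthreshold saturation (sketch)."""
    return I0 * math.exp(kappa * vgs / UT)

# A 100 mV increase in Vgs multiplies the current by exp(0.7 * 0.1 / 0.025)
ratio = subthreshold_current(0.4) / subthreshold_current(0.3)
print(f"{ratio:.1f}")  # roughly 16.4
```

This is why the measured curve is a straight line on the log scale: each fixed voltage step multiplies the current by the same factor.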
So now here you have a source and a drain and a gate. Yes. So how do we interpret it? Think of the source here as the inside of the cell, and think of the drain here as the outside of the cell, right? And remember, the number of channels that open along the membrane of a cell depends on the voltage difference across the membrane. Okay. So here, is this threshold related to something like minus 70 millivolts? Well, this threshold is something that comes from the fabrication of the transistor, so it's got nothing to do with the threshold of the neuron. Okay. Right, yeah. So yeah, I should point that out. In the rest of the industry, which builds circuits out of above-threshold transistors, this is how you get your ones and your zeros, right? So if I'm above the threshold, it's a one, if you like. If I'm below the threshold, the transistor is shut off, so you can think of it like a switch. If I have to binarize the whole thing: if I go above one volt, the switch is closed; if I go below, the switch is open. For inward current, do we have to reverse the potentials on the source and drain? For example, this is for the potassium current, the potassium conductance. Right, right. So for the sodium conductance, you have to apply the reverse voltage to the transistor? Yeah, so since we only have electrons and holes, those are the only carrier types we can play with. And the way the potassium and sodium flow is that we'll use one of these transistors to imitate what's going on in the model, in the Hodgkin-Huxley model, for example. If we need a current sink, we're going to use an n-type transistor. If it's a current source, then it's a p-type transistor. Yeah, right. So this is how we can get the equivalent circuits, because we don't have different kinds of electrons. Yeah. You combine the NMOS and the PMOS. Yeah. So how do we interpret it in biological terms?
Yeah, so I'm going to show some examples of circuits that implement the neuron, and then you can maybe get a better feeling for how we build the equivalent models in silicon. So here's a picture of a synapse that comes in and changes the conductance on the postsynaptic side. In this case, let's say it's sodium. What happens is that when the synapse releases a neurotransmitter, the conductance on the other side increases, and so you're going to get current coming in that charges up the membrane. And you could have an inhibitory synapse that, when it's activated, changes the conductance on this side, and in that case you're going to get a current sink. So basically the membrane potential is going to be discharged because of the flow of ions. And so what do we do? Well, we do this. We take the transistor here, which acts kind of like a conductance. We can do this because we're operating the transistor in a different regime than what industry normally uses. And if you want to charge up a membrane, so this would be the equivalent of the membrane, then we change the gate voltage over here so that you get holes coming in, and now it charges up the membrane. So basically, whenever you get a spike coming in that goes to a synapse, it's going to change the conductance. What we do is, whenever there's a spike, we change the gate voltage here so that now I get a current flowing. And the same thing on the other side: here we use an n-type transistor, because it's better as a current sink, and we modulate the gate voltage over here, so that we get the discharge of the membrane. And so this is showing you that transistors are not inherently digital. It's just the way that you build the circuits that makes them act digital, between 0 and 1. Yes, right. So if you run it the way industry does, as digital, then you connect them up. Here's your gate.
Here you would have a voltage that says it's a 0, and then the output would be a voltage that says it's a 1. Using the sub-threshold? Yeah, so generally our circuits are sub-threshold. I mean, you can imitate a model of the neuron using above-threshold operation as well, and this is done, for example, with the kind of circuits that the Heidelberg group builds. So you can do the equivalent there too. But we like the sub-threshold regime because it gives you the exponential properties, and also this is the region where the currents are very low, so it gives us the low power dissipation of the circuits. Okay, and so here I'm showing again that piece from the previous slide, and now I'm going to tack on the leak conductance. So this would be as if you have a soma. And then you're going to put in a capacitor that imitates the capacitance of the soma. All right. And so this is showing you that we can imitate the complementary voltage-gated and neurotransmitter-gated channels that you see in biology. There was actually a circuit built by Rodney Douglas and Misha Mahowald back in 1991, and it made the cover of Nature. They built a rough abstraction of the Hodgkin-Huxley neuron. And you can see here, for example, what you would get if you do the recordings from an actual biological neuron. Actually, do you know which one is the biological neuron and which one is the silicon neuron? So one of them is from biology and one is from silicon. Not really. Sometimes you can get hints. No? Okay, so maybe I should tell you what the curves are. What you do is inject current into the cell, a step current, right? And then you look at the membrane potential of the cell. Same thing here: step current, membrane potential of the cell. And then you see spikes. So you can see the membrane potential first increases, and then a spike is generated. So, no idea?
Really? Which one? The right one? So the left one is the silicon and the right one is the biology. Usually you can tell from the one that's not so noisy: the left one is the silicon, yeah, and the right one is the biology. That's true. And partially sometimes you can tell because the spikes are more regular. Right, that gives you a hint. On the right side you have even numbers: 0.8, 0.7, 0.6. I see, that's how you can tell. Yes, that's very good. But in general you can fool someone, right? You can get a very similar kind of firing output from the silicon neuron versus the biological neuron. And this one I won't talk about in detail, but it's really just showing the adaptation of the neuron. So if you look at the inter-spike interval after the first spike, for different amounts of current, you're going to get more spikes as you increase the current. And if I now look at the second inter-spike interval, which basically is from here to here, then you get a different, what they call, f-I curve. And if you look at the third one, you can see it's starting to adapt. So basically the rate of the spikes drops as time goes on. This is called adaptation. So they could even imitate the adaptation that you see in the Hodgkin-Huxley neuron, in other words. Okay. Of course, for all the big systems that you see, we don't build Hodgkin-Huxley neurons. There's a reason for that: the number of transistors that you need to imitate every one of these ionic currents, and also the variables, the inactivation variables and activation variables, costs a lot of transistors. In this case, I think they only imitated three ionic currents and it already cost about 50-something transistors, which is a lot. So then, for the same chip area, we cannot build lots and lots of neurons. And so what we do is revert back to the more simplified integrate-and-fire model that Hill and Stein came up with.
So you can see there's only one differential equation, right? And all you require is that when the voltage exceeds the threshold, you make a digital pulse. There are now new versions where people also add in a linear leak, to make it closer to the actual biology. Okay, so all we need is something that does integration, something that does comparison, and finally something that does a reset. Those are the three components that we need. And so how do we do this? This is the equivalent circuit. Here's our neuron again. We're going to inject current, so for us this is a current source; it's just a transistor, like I showed, right? And then this is the capacitor, the membrane capacitance of the neuron, if you like. When the voltage on this capacitor increases because of current coming in, it exceeds the threshold of this comparator. What the comparator does then is generate a digital pulse. This digital pulse comes back, and, if you like, it acts like a switch, and it discharges the voltage on the capacitor through this path. Sometimes we'll put in extra transistors so that we can actually control the rate of discharge. And then, to imitate the positive feedback that you see in the generation of a spike, all we do is put a little capacitor from the output to the input. This is something that says: as soon as the output starts to rise, you're going to couple this change in voltage back to the input side, so that it really does go one way, right? It doesn't just hover somewhere. Okay, so we have a part that does the integration, a part that does the comparison, and a part that does the reset. And to see if we can imitate something like this, you put in a step input of current, and then you look at how the membrane potential first increases, then it makes a spike, resets, increases, resets. So here's a movie to show you data from one of these silicon neurons, if you like.
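The three pieces of the circuit, integration, comparison, and reset, can be captured in a minimal integrate-and-fire simulation. This is a sketch in Python: the parameter values are illustrative, and there is no leak (the simplest form of the model).

```python
# Minimal integrate-and-fire simulation mirroring the circuit above:
# integration on a capacitor, comparison against a threshold, and a reset.
# All parameter values are illustrative; no leak is modeled.
def simulate_if(input_current, c=1.0, v_thresh=0.95, dt=0.1, steps=100):
    """Return spike times for a constant input current (leak-free IF model)."""
    v, spike_times = 0.0, []
    for step in range(steps):
        v += (input_current / c) * dt    # integration: dV/dt = I/C
        if v >= v_thresh:                # comparison against the threshold
            spike_times.append(step * dt)  # emit the digital pulse
            v = 0.0                      # reset the capacitor
    return spike_times

spikes = simulate_if(input_current=1.0)
print(len(spikes))  # -> 10 spikes in the 10-second window
```

As in the circuit, the membrane voltage ramps linearly under constant current, fires when it crosses the threshold, and starts over from the reset value.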
So what you're going to see on the bottom here is a set of Poisson-distributed spikes, right? It's inverted, so it should be the other way around. And then what you're going to see here is the membrane potential, which increases as each spike comes in, and you can see that as soon as it goes up to threshold, it fires. Okay, so you see a spike. Here you see the output spikes, and before that there were the input spikes. So you see it builds up, fires, builds up, fires, builds up, fires. All right. And then how do we do the synapse? Well, synapses can be of different types, and the simplest type of synapse is the cheapest in terms of transistors. All we do is, in series with this current source, we stick another transistor, which acts like a switch. Every time a spike comes in, we close the switch and dump some charge, and then as soon as the pulse goes away, we open up the switch again. No more current coming in. That's all. Of course, you can make much more sophisticated versions of the synapse where you also have time constants, and usually that ends up being about five transistors. Okay, so then finally, how do we put these different neurons together and wire them up in the way that we would like the network to be when we actually run simulations on the chip itself? If you look at an actual biological network, you're going to see neurons talking to each other in some way that we could never imitate in silicon. Partially, we have a problem in that silicon is two-dimensional, not three-dimensional, and so that means we cannot put wires going up like this; they always have to go like this, in the plane. And so the way that we solved it is that we do all the wiring virtually. All we do is make a chip with lots of neurons. So this is kind of the cell body of the neuron, right? We have synapses.
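The switch-style synapse described earlier, where current flows onto the membrane only while the input pulse is high, can be sketched as follows. The unit current and time step are arbitrary illustrative values.

```python
# Sketch of the simplest pulse synapse: a switch in series with a current
# source. While the input spike (pulse) is high, the switch is closed and
# charge is dumped onto the membrane; otherwise no current flows.
def synapse_current(spike_high, i_unit=1e-9):
    """Synaptic current: i_unit amps while the pulse is high, else zero."""
    return i_unit if spike_high else 0.0

# Integrate the charge over a pulse train (1 = pulse high in that time step)
pulse_train = [0, 1, 1, 0, 0, 1, 0]
dt = 1e-3  # time step in seconds, illustrative
charge = sum(synapse_current(p) * dt for p in pulse_train)
print(charge)  # total charge dumped onto the membrane
```

Each high step dumps a fixed packet of charge, which is exactly what the series switch transistor does in the circuit; adding a time constant would turn this step response into a decaying current.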
And the wiring is done by putting addresses on each of the neurons. And then we send spikes in, right? It causes charge to flow to the soma. And if the soma makes a spike, then what we do is we take all the spikes onto a common wire, just one wire. And then we're actually not sending the spike, but we're sending the spike's address. So basically, we're saying that, for example, in this case, where you see neuron number two fired first, we put out address two. Neuron number one fired next, we put out address one. Neuron number three fired, address three, and so on. But this is the same physical wire going out from this chip. And the reason why we can do this is because silicon is extremely fast. We can do this kind of transmission in nanoseconds, whereas our neurons are still running in milliseconds. So we can send off over 1,000 spikes in the time between two spikes from the actual neuron. And the other thing that we do is the whole thing is asynchronous. So to follow kind of the style of processing that you see in the brain, all the circuits are asynchronous. It means there's no clock that runs on the chip, right? So only if it interfaces to another digital chip could there be a clock again. So now if I take the asynchronous information that comes out, then it has to go back in again into another chip. Or you can take the same information, go to some sort of mapping table, and then come in and drive the same chip again. Some neuron on the same chip. So if I want to, say, put a wire between neuron three and neuron two: every time neuron three spikes, I'm going to see the spike address from neuron three. It goes to some mapping table, which I'm not drawing here, but it's just a memory block. And then it says every time I get a spike from neuron three, please send a spike to neuron two. So what it does is when it sees that a spike from neuron three arrives, it's going to go and send a spike to neuron two. And this is how we do our wiring.
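The address-event idea above can be sketched in software: spikes become (time, address) events on one shared bus, and a mapping table implements the virtual wiring. The function names and the dictionary-as-memory-block are our own illustrative choices, not the actual AER hardware interface.

```python
def encode_events(spikes):
    """Multiplex spikes onto one 'wire': time-ordered (time, address) events.

    `spikes` is a list of (time, neuron_address) pairs; because silicon is
    much faster than the neurons, serializing them in time order loses
    essentially nothing.
    """
    return sorted(spikes)

def route(events, mapping):
    """Apply a mapping table to do the virtual wiring.

    `mapping` plays the role of the memory block in the lecture, e.g.
    {3: [2]} means 'every time neuron 3 spikes, send a spike to neuron 2'.
    """
    out = []
    for t, addr in events:
        for target in mapping.get(addr, []):
            out.append((t, target))        # deliver a spike to the target
    return out
```

Changing the network topology is then just editing the table, rather than rewiring anything physical.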
And it's much more flexible than if you actually put a physical wire between here and here. And pretty soon you're going to fill out the space with wires. Yeah, it's multiplexing, yeah. And so on the other side, if you imagine another chip where neuron two comes in, I can come here and I can even add more information, and I could say, you know, stimulate neuron two with either an excitatory synapse or an inhibitory synapse. So there's a lot of flexibility in what kind of information you should give to the target neuron. And so this is how all the multi-chip systems are built nowadays. And this particular protocol is called the address event representation protocol, or for short we call it AER. So you see this word, or this acronym, show up a lot in the neuromorphic engineering world. Okay, and so these are some of the big systems out there. So that's the one from SpiNNaker, which is a set of digital chips, and in each of these chips there is an ARM core. And then you have the one that comes from BrainScaleS, or the Human Brain Project. And here you're talking about wafer scale numbers. So basically you're going to populate one whole wafer with chips. Usually what we do is you send your design to someone who collects designs from all over the different labs in the world that want to have their chips fabricated on a particular process. And then you get just a piece of it. You don't get the whole wafer. To get the whole wafer, that costs like, I don't know, at least a quarter of a million dollars to get the whole wafer back. So usually you're sharing the cost of getting your chips back, and then for us, for a 10 millimeter size chip, it will cost us maybe about 5,000 euros. So it really helps the people who are prototyping chips. And so their intention is that they want to build a simulator where they're going to have wafer scale systems where now you can run your simulations.
And they also run their circuits above threshold so that they can run faster than real time. So basically you can have very fast simulations. And so we go with a different philosophy, because we want our systems to interact with the real world and we don't have to run faster than real time. So we run at real time. And then I talked about the one from Stanford called Neurogrid. And so you can see there are a million neurons kind of distributed over the different chips. So this is a chip here. See those different ones. And some of them are additional chips that you buy commercially, and some of them are custom chips in the sense that we build them, right? We created the circuits. We send them to a foundry. A foundry sends the chips back and then we... It runs in real time or faster than real time? Yeah, so for us real time means the time constants of our world, right? You know, like when I move. So I'm not talking about something extremely fast. So time constants of the natural world, which is what we call real time. Yes? There are several definitions of real time. If you talk about robotics applications, typically you have the distinction between hard real time and soft real time. So real time means wall clock time. And if your computing is in a well-defined relation to the wall clock time, then you're approaching real time. And hard real time is if you can ensure that you will be in sync with the wall clock time. For example, if you have a bipedal robot or you have a high throughput machine, then you have very thin time slices of 10 milliseconds in which you have to finish your computation, otherwise you're chopped off. And that would be hard real time. And in that sense, for example, the Heidelberg hardware is about 10,000 times faster than the actual world. Right, it doesn't have to run so fast, but it runs much faster than it has to, right? Yeah, and this is the one that just came out in Science just last week, or a couple of weeks ago now.
And so it's from IBM, it's called TrueNorth. And they also have a million neurons, and the neurons are digital, and the synapses are digital, but the whole thing is asynchronous, right? So that's the difference between this one and this one. Oh, sorry, this one. So this one, the chips are synchronous, but the communication of spikes between chips is asynchronous. So you see all these different kinds of combinations that people are trying out to see which is the most effective solution in the end. But all of these systems are basically aiming for very low power, right? Okay, so one way of using these chips, you might ask, so what am I going to do with these chips, right? How are we going to use them? So one of them is that you can configure them so that you can run the kind of networks that you're running on your laptop so that you can get your simulations faster, right? Because now you're really running on the physical substrate. You're not going in and programming a differential equation, because these elements are the differential equations, if you like, right? You have the dynamics of the differential equations that you're interested in. But what we're also doing in our institute is that we're not trying to do the neuroscience modeling alone, but we're trying to look for computational primitives. So we don't want to be slavish in some sense to what you see in the nervous system, but we want to look for the organizing principles of what the nervous system is trying to do, except it does it, of course, with its own units, right, which are the neurons. And so one thing that is used actually quite a bit in the neuromorphic world really comes about because of the kind of connections that you see between neurons. For example, these are neurons that have been labeled by John Anderson at our institute. And so basically, you look at the statistics of the connections between neighboring neurons, right, and also how they talk to other neurons.
And then you come out with some model or some primitive that you can use over and over again. And because we know that about 80% to 90% of the neurons are excitatory rather than inhibitory, one of the primitives that people have come out with is something called the winner-take-all. So in this case, what you see are excitatory neurons that talk to an inhibitory neuron, and the inhibitory neuron goes back and inhibits all the excitatory neurons. And to also imitate the recurrence that you see between the neurons, you can put in excitation of either neighboring neurons, or self-excitation, or second-order neighboring neurons and so on. And so this by itself is a primitive that we use in our circuits, right, to do the nonlinear amplification of the inputs coming into the system. So you can actually choose, you know, one of the inputs rather than always processing all the inputs as if you're in a linear system. So it's a great nonlinear primitive that we use. Okay, so I'm just going to show you an example of what we get on this chip. So here you're going to see a set of 16 by 16 neurons. You're going to see the output firing rate of the neurons, right? And so what we do here is that we give all the neurons the same input, but one of them we give a slightly higher input. And so now from the chip itself, you can see the system is going to, first of all, dynamically figure out which one has the highest input, and then at the end it's got only one guy, right? Which is the one that receives the highest input. Okay, so the next thing that I'm going to move on to now is a couple of the sensors that were built in the institute. So the first one is called the Dynamic Vision Sensor, DVS, or in other words, it's also a silicon retina, okay? So I'm going to quickly talk about the retina so you can see how we've taken a particular piece of the retina to model on silicon, right?
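The winner-take-all dynamics just demonstrated can be sketched as an iteration in which each unit excites itself in proportion to its input while a global pool divides everyone. This is one simple, provably convergent variant (divisive inhibition); the silicon circuit uses a spiking inhibitory neuron instead, and all constants here are illustrative.

```python
def winner_take_all(inputs, steps=50):
    """Soft winner-take-all via self-excitation plus divisive inhibition.

    Each iteration, a unit's activity is multiplied by its own input
    (self-excitation), and then the pooled activity divides everyone
    (the global inhibitory feedback). After enough iterations, only
    the unit with the largest input retains significant activity.
    """
    acts = list(inputs)
    for _ in range(steps):
        acts = [a * x for a, x in zip(acts, inputs)]   # self-excitation
        total = sum(acts)                              # inhibitory pool
        acts = [a / total for a in acts]               # divisive inhibition
    return acts
```

As in the 16-by-16 chip demo, giving one unit a slightly higher input is enough for it to end up as the only active one.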
So if you remember, we have the eyeball, the light comes in, falls on the back of the eye, which is the retina, and then you have a certain set of cells, right? So the light basically goes through all the way to the rods and cones at the back, and then the rods and cones synapse onto the horizontal cells, the bipolar cells, then you have the amacrine cells, and then the ganglion cells, right? So everything here is analog, except for the last part, which is digital, right? Because this is the last part that sends out pulses to the rest of the brain. And we also know that there are 10 to the power of 8 analog photoreceptors, which then get turned into 10 to the power of 6 ganglion cell spiking outputs. And the other thing that the retina is very good at is responding over a huge dynamic range, 10 to the power of 9, so you can see from moonlight to sunlight. And of course of interest to us is the fact that the power consumption is only about 3 milliwatts. So it'd be cool to have an artificial device that has all these properties. And so your camera doesn't work the same as the eye, right? Your camera takes pictures, and humans like pictures. And so it's true that the specifications are different. The camera industry is interested in making high resolution images, if you like, and getting the pixel area, or if you like the size of the photoreceptor, to be extremely small, right? So it's not concerned about getting, you know, this very high dynamic range, and not necessarily about power consumption, though of course that's important also. And so why is it that the retina can get such a huge dynamic range?
And so this is what we're interested in looking at. And so basically this huge dynamic range comes about because of the rods and cones, and this we already knew back in the late 70s when Normann did these experiments on turtle cones. And so this is the plot that you get of the response of a cell as a function of intensity, and you can see that the cell responds well over six orders of magnitude of intensity. And so each one of these operating curves that you see comes into play depending on the background intensity. So if I'm sitting somewhere here where it's very bright, right, and now I present a step contrast up or a step contrast down, right, I'm going to ride along this curve. And if I change my background intensity suddenly so that I go from bright to slightly darker, the cell's going to adapt so that now it's at a different operating point, and then now if I do a step contrast up or step contrast down, now I'm again on a different curve. So this green curve has a much higher gain than the red curve, right? So the red curve is your steady-state response, as we call it, and the green curve is the transient gain response. So basically you're always adapting to the background intensity, and then you have a very high gain for transient changes. This is how your eyes can always see over such a huge range of background intensities. And so for the cameras that you buy, it's just this one curve, right? Let's say it doesn't adapt to the background. Oh, right, and the other thing that's of course interesting to us is that if you look at the response, it's a function of the log intensity, and logs are very nice because if I multiply two numbers and take the log, then I'm just summing up the two logs. And the same thing if you're doing division. So that's why we also like the log encoding. And so let me just move on to now what we do. So here again is the slice of the retina.
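One way to see why log encoding is so useful here: a contrast step is a multiplicative change of intensity, and in the log domain a multiplicative change becomes an additive one, independent of the background. A one-line derivation (our notation: background intensity $I$, contrast $c$):

```latex
% A contrast step of c on background I changes the log-encoded signal by
\log\bigl(I\,(1+c)\bigr) - \log(I) \;=\; \log(1+c),
% which depends only on the contrast c, not on the background I.
```

This is exactly the property the DVS exploits later: the same contrast edge produces the same response in bright sunlight and in dim light.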
So we're going to draw some cones, bipolar cells, and ganglion cells. So in the DVS, we're just trying to model this piece, this piece, and this piece. So we don't model the horizontal cells. We don't model the amacrine cells. There are retinas that have been built that do this, but we know that the more transistors you have in your circuit, the worse your picture is going to look. And this is because of something called mismatch, which is that the properties of every transistor that you have are different. It's because of the fabrication process. I can build two transistors, I can give them the same gate voltage input, and you're going to see differences in the outputs of the transistors. So it's like a variation in how they respond to the same input. And so because of that, we try to have as few transistors as possible. And so in this particular DVS retina, which was done by Patrick Lichtsteiner and Tobi Delbruck, what they did is they said, look, I'm going to model just one pathway of the retina, the transient pathway, where I look for changes. I'm not going to model the sustained pathway, which is the other big pathway of the retina. And so what we do is you take the photocurrent coming in onto the chip. You do a log compression. And now what you do is you throw away the DC, because you're interested in the changes, right? And the way you do it is that you put a capacitor right on the output of the photoreceptor. So basically it doesn't respond to DC. And then we go to a high-gain amplifier. So basically we're going to look for changes. And then we're going to amplify up these changes, right? So that now we can take the amplified version and we're going to compare against two different thresholds, an on threshold and an off threshold. So what this piece over here is doing is that it's going to look for contrast changes, because we've already thrown away the DC, okay?
And then it's going to look at whether this contrast change is increasing or decreasing, right? So in some sense it's imitating what the on and off ganglion cells are doing, and then it sends it out as a spike, okay? And then whenever a spike is made, what we do is we come back and we reset this input so that we always start from the same point again, build up, you know, fire. And it's a bit like a neuron, if you like, except that it's an asynchronous self-resetting neuron. And so now, once you have a pixel that works beautifully, because there are many ways, you know, many ways that you can make the circuit so you get something that looks like the output of an on and off ganglion cell, what you do is you stick them together into some array size, in this case 128 by 128, because we usually are limited by the cost of the silicon area. And then you can see this is the layout of the chip, and this chip is 6 by 6 millimeters squared. The transistors are drawn because the way the transistors are made is that you have different layers, or different steps, that tell you what you should do to the silicon. Okay, and each of the steps is usually specified by a particular color. So it's kind of art for me. If you look at the details, it really looks like you're drawing, you're making art. But each time you see a yellow piece, this is a transistor, because the gate is usually drawn in red and the diffusion in green, so where they cross, they become yellow. Okay, so in this pixel, you have the photodiode area, which is where the light comes in and gets converted into an electrical signal. The electrical signal is sent to a set of circuits that imitates the bipolar and the ganglion cell. And then the digital part over here is the one that now sends out the spikes and transmits to the outside world. And so we have an array of this, and then we have what we call AER circuits that take the spikes and then send out the information in an asynchronous way.
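The pixel's operating principle above (log compression, DC removal, comparison against on/off thresholds, self-reset) can be captured in a small sketch. The threshold values and the one-sample-per-step simplification are ours; the real pixel does this continuously in analog circuits.

```python
import math

def dvs_events(intensities, theta_on=0.1, theta_off=0.1):
    """Emit ON/OFF events when log intensity changes past a threshold.

    Toy model of one DVS pixel: log-compress the photocurrent, remember
    the level stored at the last reset (this removes the DC), and emit
    an ON or OFF event whenever the temporal contrast crosses the
    corresponding threshold, resetting afterwards.
    """
    events = []
    ref = math.log(intensities[0])          # level stored at the last reset
    for t, i in enumerate(intensities[1:], start=1):
        change = math.log(i) - ref          # temporal contrast (DC removed)
        if change >= theta_on:
            events.append((t, "ON"))
            ref = math.log(i)               # self-reset after the event
        elif change <= -theta_off:
            events.append((t, "OFF"))
            ref = math.log(i)
    return events
```

Because only log ratios matter, scaling the whole intensity sequence (brighter or darker lighting, or viewing through sunglasses) produces exactly the same events.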
And then we also have blocks which we call bias generators. And what they do is they supply the parameters of the circuit so that you can operate the circuit in the right regime. And then, once everything works beautifully, what you do is you get the chip back. Usually it comes back in a package, and then you put it on something called a printed circuit board. So here's an example of a printed circuit board that has a custom chip on it. In this case it's the cochlea, which I'll talk about next. And then you hide everything in a nice beautiful box. And then for this case, the retina, you put a lens in front of it and then it's really like a camera. And then now you take the spikes, so the power is coming in through here to power up the chips, and at the same time this USB connection is also used to send the spikes back into your computer so you can view them. Okay, so let me see if I can do my demo. It's always dangerous with demos. All right, so first thing is where is my... Ah, there we go, cool. Okay, so now remember, these are asynchronous spikes coming in, but because we like to look at pictures, what we do is we bin the spikes and then we present them as if they're frames. And so what you see here, first of all, these are like noise spikes. So whenever you see grey, that means no spike. Whenever you see black, those are off spikes. When you see white, those are on spikes. So if I put my hand in front of it, see, I see just the outline of my hand, and of course, depending on the direction in which I move my hand, you're going to see the leading edge here will change from white to black, because it's either the contrast increasing or decreasing. So the thing that's nice about this retina is that if you don't move, you get no spikes, except for kind of what we call the dark current spikes, the ones that...
there's some pixels that spike all the time. And then if I move, I see kind of just the outline of the object that moved, and so the information that comes out is very sparse compared to if you take a camera and you are grabbing frames all the time. Right? And let me see if it sees you. So there we go. And then now show... So usually Tobi is the one that does the demo, so he's much more sophisticated at this. So, to show that you can see even through sunglasses. So remember I said that the retina sees very well even though we're under different lighting conditions, and it's sensitive to contrast. So let's see if this is true. So here's the sunglasses, and so you see my hand behind the sunglasses looks about the same as the part outside of the sunglasses. So yeah. So this is also very cool. Do we have a product order from NSA? No, not yet. And this framework that you see here is called jAER. So Tobi worked on this. It's a Java-based software project that's free. It's hosted on SourceForge. And so if you buy a sensor, or you build your own and you use the same kind of communication protocol, then this project basically allows you to write filters. So basically you can take the spikes and then run certain kinds of algorithms. I'll talk about them in a second, so that maybe you can look for orientation. You can pretend these are cells firing spikes, right? And then how can I, for example, using the timing information in the spikes, determine when I see a particular type of orientation? So the thing that's different from kind of the frame-based world out there in machine vision is that the information coming in is not frame-based, right? There's no clock running. So basically when something moves, as soon as I move, I know when it moves, right? So I can get great timing information with this. There we go, like him. All right, so here...
So this demo is much more impressive when I show you the video later. But the idea is that even if something moves very fast, this retina will see it, right? So usually your camera runs at about 30 or 60 frames a second. So it's going to miss things that move very fast. And so the way we go usually is that when we're missing some tool that we don't have, we go to a hardware store, we go to a toy store. In this case, we have a little SpongeBob that spins if you press it, and what we did was stick a little disk that we made in front of it, and we put a little dot on it. And so now if I ask you to follow the dot, it's very hard, right? Yeah, so let's see if the retina sees this. And usually you have to focus it well also so that you can see the thing. And so now what I'm going to do is I'm going to log the data. Okay, so I'm going to run for a while and I'm going to stop it. Okay, you can save the recorded file. And now you can play it back. Okay, so now it's playing the recording. So this was just from before. And now let's see if I can slow it down. One second. Oh, I'm actually going the wrong way. All right, I was speeding it up. And now you can see, so basically now I'm plotting the spikes for a much smaller bin time. And I think under... Yeah, in this case, you don't really see the dot. So it really depends on the focusing and also the... which I didn't have much time to do. So what I'm going to do instead is I will play you what it could look like, which would be much better. Hold on a second. I lost my... where is it? Ah, there we go. Is it this one? Yeah, there we go. Okay, so same kind of experiment. And now I'm slowing it down. So you can see... now you can see that particular dot as you decrease the frame time, right? And because there's no actual frame time, because there's no clock running, you can just bin it into whatever frame time you're interested in. And the other thing is you can also plot it in terms of space and time.
And if you look at it in time, it's actually a helix, right? And so it really tells you that you could actually just look for how a certain pattern evolves in time, the motion of the stimulus. Okay, now... I'm not sure if I... Oh yeah, and the other thing that's also very cool is that you can use it to train students, if you like. So I'm going to kind of look at what a particular cell here, the one with the blue dot, is listening to. And so you can hear that most of the time it doesn't do anything. So you see now when something moves across it, the cell spiked, right? And so you can use this as a way of also training students to plot receptive fields. So we have another device that this is what it's used for. Okay, how much time do I have? Go back to this. All right, so here's an example of how you can use the output of the spikes. So you probably know about this Hubel and Wiesel model where you take the output of simple cells, right? And then you combine them into another simple cell. And so the idea is, with all the stream of spikes coming at you, how do you know which spikes are the most useful spikes? Which spikes convey the most information? Because now it's not like a clocked system where, at the tick of a clock, I present the right input and get the right output. This is really living in a world where things are hitting you all the time. And so one way that you can look for, say, an orientation, so this is the case of a simple cell that has a particular orientation preference, is that you take a set of LGN cells, right? You don't line them up; you just select the ones that lie along a particular orientation, and then you say whenever any one of them spikes, you know, I'm going to send an output to this guy, right? But of course you don't know when they spike. They could spike anytime, right? And so why not just use the timing of the spikes that come from this?
Because if the right orientation comes along, it's going to hit everybody at the same time. You're going to see spikes that all come within some very small window. And so this is what we mean by: you can actually look for the orientation now by the DVS event synchrony. And so the way it works is you send the DVS events, and now we do this in software, not in hardware yet. We can store the timestamp of the spikes because we have digital chips that timestamp them before they go back into the computer, right? And now we use the map of the last event times, and then we use it to drive, say, four orientations, right? So you're looking for the diagonal ones and also the horizontal and vertical ones. And so what you do is, every time a spike comes, you look in the neighborhood this way, this way, this way, or this way, and you look to see if the last timestamps of the pixels in that neighborhood are very close to yours. And if, for example, this pixel gets a spike, and I look across here and I see, oh, other pixels also got a spike in the last maybe few milliseconds, then it must be a horizontal orientation, right? And then from then on, you can send out even more spikes, right? Except now your spikes also carry the orientation label. Okay, so it's not just a spike that has a single bit, one or zero, but it also carries information with it. And so now let's see how this works. So I'm going to show you what happens if I take a little square that's drawn, and then I'm going to have different colors for the different orientations, okay? And then I run the algorithm as I've just described. So now, so I move the square around, right? On a piece of paper, so this is Tobi doing an experiment. Okay, if I run it past the cell, then the cell spikes. And then now what I do is I'm going to fast forward to this part. So now I'm going to run this filter, and in this case the filter is going to output spikes.
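The last-event-time map just described can be sketched as follows. This toy version only distinguishes horizontal from vertical with 1-pixel neighborhoods, and the coincidence window is a made-up value; the real filter uses four orientations and larger neighborhoods.

```python
def orientation_events(events, window=3e-3):
    """Label DVS events with an orientation using event synchrony.

    `events` is a list of (time, x, y) tuples. For each incoming event,
    check the map of last event times: if a horizontal neighbor fired
    within `window` seconds, label it 'horizontal'; else if a vertical
    neighbor did, label it 'vertical'. Unlabeled events are dropped.
    """
    last = {}                                # (x, y) -> last event time
    labelled = []
    for t, x, y in events:
        if any(abs(t - last.get((x + dx, y), -1e9)) < window for dx in (-1, 1)):
            labelled.append((t, x, y, "horizontal"))
        elif any(abs(t - last.get((x, y + dy), -1e9)) < window for dy in (-1, 1)):
            labelled.append((t, x, y, "vertical"))
        last[(x, y)] = t                     # update the last-event-time map
    return labelled
```

Note the output events now carry an orientation label on top of the plain spike, exactly the idea of sending out "richer" spikes.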
They are labeled with a particular color depending on what it thinks their orientation is. So this is him zooming in and out, Tobi's zooming in and out with this experiment. So because we know this is vertical, so vertical orientations are blue, so it gives you the vertical ones, right? And then when you move it more, then you can also see the horizontal ones, right? So basically you get your orientation just by looking at the timing. Yes, sorry? Did you use some prediction? No, in this case we didn't, but there's some research going on where we also look at the probabilities, right? Because the spikes that come in are slightly noisy. So the probabilities probably will help you determine even better what the actual orientation is, yeah. And so you can do things like, if you turn it, then you can see that it looks for the diagonal ones, and they're labeled with the appropriate orientations. Okay, and the other thing, to show that because of the sparseness of the output we can do very interesting sensorimotor tasks. So here's an example of what we call the robot goalie. And so the idea is that, again, this was constructed at the Telluride workshop in two weeks, finding parts from the hardware store. So you take some box, you have a piece of wood that's going to be like your arm, and then what this arm does is it has to block balls that you shoot at the goal, if you like. And then the DVS looks at the ball, and it has to infer, first of all, from the spikes coming in where the balls are, right? And second of all, it has to infer what the speed of the ball is, so it knows when it should block it, right? Okay, so here's the live demo. Somebody's flinging balls, and see, it's just, I mean, it's amazing, right? So Ace Hardware, this is the linkage. And so now if you look at the output of the camera, then you can see it's much slower, everything's very fuzzy. But if you look at the output of the DVS, then you see spikes generated every time the stimulus generates a spike.
Oh, here's the real action. So it's Tobi. So the way that you can tell where, or what, a ball is, is that you look for clusters of spikes that are coming together in a local neighborhood, and that's a great way of tracking balls. Okay, all right. And so basically this system achieves 550 frames a second, and it has a three millisecond reaction time, so it's extremely fast. It's great for robotic systems, where usually you need to have a fast reaction time, and it runs on only a 4% processor load. So it's just demonstrating that the computation is very cheap because of the fact that the output of your retina, you know, is in this different kind of representation. And then, yes. The output of the DVS is computed in, what, the normal way? Yes, so the orientation you mean, or the cluster. Yeah, it's computed conventionally, yes. Yeah, right, that's right. So we went with conventional means when we started looking at algorithms, because it's just much faster to prototype and to understand things. And we're in the process of now moving it into hardware. Right, so the lesson we learned, I think, in the first initial ten years, was, first, the technology wasn't there yet. And second of all, if you try to run experiments on the devices that you made, right, I mean, they're going to have a lot of mismatch and variances. And so now you're running on yet some other system where you don't have good models for it. So the best thing was to take the output of sensors, run it on traditional computing platforms, figure out what the algorithms should be like, and then transfer them into networks of spiking neurons, if that's what you're interested in. But commercial devices don't have to have spiking neurons. Right, you can be inspired by it, but you don't have to build a kind of full dedicated model of it. Your chips, once you program them, or once you print them, they have a specific function. They're not like FPGAs. Yeah, you cannot reconfigure them, right?
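The cluster tracking used by the goalie can be sketched as a running mean over recent event positions. This is a deliberately minimal single-cluster version with a made-up step size; the actual tracker handles multiple clusters, cluster birth/death, and velocity estimation for timing the block.

```python
def track_cluster(events, alpha=0.1):
    """Track a ball as the running mean of incoming event positions.

    `events` is a list of (time, x, y) DVS events. The cluster centre
    moves a fraction `alpha` of the way toward each new event (an
    exponential moving average), so it follows the dense blob of
    spikes that a moving ball produces.
    """
    cx, cy = events[0][1], events[0][2]
    for _, x, y in events[1:]:
        cx += alpha * (x - cx)              # pull the centre toward the event
        cy += alpha * (y - cy)
    return cx, cy
```

Because only the sparse events are processed, an update like this costs a few arithmetic operations per spike, which is consistent with the very low processor load reported for the goalie.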
So one thing that we're doing is, for example, the output of this is going into an FPGA for some of the feature extraction algorithms that you just saw. You cannot reconfigure, I mean, this is a sensor. So you're going to build it for the best specifications possible, right? You understand what you want from it. And so that's why this can be custom, right? But for the rest of the stuff, where you're still exploring or you want to reconfigure, then you can go into an FPGA. If you were to make this configurable, this chip would be gigantic, or your fill factor, which is the percentage of photodiode area within the pixel, is going to be so tiny that basically you're going to have, you know, maybe a sensor like this, instead of like this, which is what we want. Yeah, so the other sensor that I wanted to show off, so you can see this only models the transient pathway. It doesn't model the other pathway. So people like pictures. So what we figured out last year is that we could take the photocurrent that comes into the circuit, you know, still have the rest of the circuit that does the detection of transient changes, and then we can convert that current into something that looks like the output of your camera. So you can actually get pictures along with the spikes. Even though the pictures are transmitted like spikes, they're transmitted on a clock. And so I'm just going to show you a demo here, because I don't have the device here. Where are you? Okay. I'm just going to show you one picture. Where is it? Ah, here we go. So basically from the same device like this, right, you can get the normal picture that you see, right, like a video, and then on top of it you see these green and red spikes. So it shows you only the parts that move. So where is that? I think I, yeah, there we go. So you can see when you have frames, you kind of skip, right? Because you can only see what you see at the clock tick.
But with the DVS you can, for example, even plot the activity of transient changes, right? You can see movements are much smoother than if you had frames. And so this is our latest device that we have. So there we're doing cluster tracking, so you even know which spikes belong to a person. And the idea for also having this is, first of all, it's good to have pictures or some sustained output when you're trying to recognize objects. And the second thing is, this will be something useful also for all these machine vision algorithms that are out there, right, that people are using. You see how you always catch all the activity in between, right? Because, I mean, basically a frame from a camera is very redundant. You can see most of the picture is not changing at all. Why should you keep on sampling the same thing over and over again, right? So just pick up the things that you need, that are moving. All right. So let me move on next to the cochlea, which, after you see this, maybe is not so impressive as the retina. OK, so we wanted to also have another sensor, so that we can study things like sensory fusion. Also, hearing is yet another input that's very important. And so, just so that you can understand how we came to the silicon model, just as in the retina, first let me give you some idea of how hearing works. So when you hear sounds, it causes pressure changes, which then cause your eardrum to vibrate. And then it goes to the middle ear with the three bones, and then, if you zoom in, the last one basically pushes against the oval window of the cochlea, this spiral-like organ, right? And now if you do a cross-section through here, you're going to see that it's got a fluid chamber. So you don't see this, but basically this is the same fluid chamber on both sides. And then inside you have this part which is called the organ of Corti, right? And there's yet another fluid chamber in here.
And now if you take this set of cells and you zoom in, what you see first is the basilar membrane. So this is a piece of membrane that moves whenever the fluid moves, because of the bone pushing on the fluid. And then on top of the basilar membrane you see a set of cells, right? So you see the inner hair cell and you see the outer hair cells, which basically do gain control. And so you get a set of these cells, usually about 3,000 of them, at least for the human ear. And then what the inner hair cells do is they sense the vibration in the fluid and change it into an electrical signal, which then causes the spiral ganglion cells, or the auditory nerve fibers, to spike, and then the spikes are sent out again to the rest of the brain. So again, in this sense, you see that all the processing is analog and then finally it's digital, okay? And so what we want to do now is figure out what it is that we can keep, so that we can build a silicon model that captures the main properties of the cells, of the organ. So in this case, first we look at the basilar membrane. If you look at the abstract model of the cochlea, usually people think of it as a piece of membrane that sits in the fluid chamber. The membrane is shaped in a particular way, so that you pick up high frequencies up here, because it's narrower and thicker, and then here it's wider and thinner. And so if you play a pure tone coming in, it's going to set up a traveling wave. So what you see is that the membrane vibrates, and at a particular place it's going to vibrate a lot, and then it's just going to dissipate the energy right after that. So there's some place along the membrane that's going to vibrate with the highest amplitude. And so what we do electronically is, because the membrane is sensitive to different frequencies, we think of it as a filter, right?
A filter that picks up different frequencies. And then what we do is we cascade them, because they kind of belong in a row. And then we tune the filters so they're sensitive to different frequencies, for example from high frequencies down here down to low frequencies, and we also try to get a shape in the frequency domain that's similar to what people record from the membrane. And then the next thing that we have to model is the inner hair cell. And so for that we go to the physiology that you get from people who do recordings in this area. So for example, this is from Hudspeth and Corey, and what they do is they look at the response of the inner hair cell as a function of the displacement of the membrane. And what you see is that if you displace it in one direction, you get a much bigger response. If you displace it in the opposite direction, you get a much lower response. And for us, in electronic terms, we think of this as a half-wave rectifier. It basically means that we only look for one part, which could be the positive or the negative part, but in this case it's the positive part, and then we try to minimize the response to the negative part. And we can build electronic circuits to do this. And so these are some recordings that were done by Palmer and Russell that show this half-wave rectification, playing two different frequencies and showing that the cell responds more on one side versus the other. Okay, and then for the spiral ganglion cells people do similar experiments. Again, they play different frequencies into the cochlea and then they look at the amplitude that's needed to create a preset firing rate for the cells. And so again, these are spiral ganglion cell, so auditory nerve fiber, responses. And again it looks like a filter. And we call this a band-pass filter, because the cell responds best to a particular frequency and then drops off on both sides of it. Okay, so I won't show this. So let's just start here now. So again, the same picture.
What is the electronic model? Well, it looks like this. So here the input sound comes in; we disregard the responses of the middle ear. It just comes straight from a microphone, goes to a set of filters, and the filters are tuned to different frequencies, from high frequencies down to low frequencies, spaced in log space. So it's basically imitating high to low, with some shape that approximates what you see in the actual experiments. And then the inner hair cell, how do we do this? Well, we have an electronic model. So basically, these are now the responses of the different filters. We take each one of them out, we go through a half-wave rectifier, which is a bit like a diode for us, and then we go to an integrate-and-fire model like the one you saw earlier. And then we put out spikes. And then the spikes are transmitted out to the outside world as AER spikes. And then we can bring them back into the computer and display them in a plot here, where the input sound comes in here, and then you're going to see responses from the different channels of filters, sensitive to different frequencies. In this case, if you make a harmonic sound, because the spike is generated at a particular phase of the cycle, you can actually see this repetition in the pattern. And so from this you can actually extract the frequency of the sound and display it here. And because the tuning of these filters is very broad, that means you're going to see a set of channels firing, instead of one channel. Okay, so here's, very quickly, the demo. So here you're going to see the board. We have on-board microphones. We have microphone jacks, so you can stick in your own microphones if you want a different baseline spacing. And then we have a binaural cochlea, like your two ears. We have pre-amplifiers that do global gain control, which means that if it's a soft sound, we try to make it louder.
If it's a loud sound, we make it softer, just like what happens in your ears. And then we have the custom chip. You can see the die size is very small, about 10 mm squared; it sits in this big package because we have to take the signals in and out of the chip. We could drop many of the signals. And then we have a bunch of digital chips that you buy commercially, and what they do is basically collect the spikes from here, time-stamp them, and then send them off to the computer over the USB connector. And so now what I'm going to do is show you how it works. I'm going to swap, so you can put another one in. First you have to pick a different chip, so it knows that there's a different kind of decoding for its output, then okay, and then you have to give it the right biases, and there we go. So what you see now are the spikes coming from the cochlea, and because it's binaural we label them with different colors. So red is for one ear and green is for the other; where they overlap you see yellow. And this again is the sound coming in, from high frequencies down to low frequencies. So if I don't speak, there are no spikes, because the frame is not updated; the frame gets updated only if new spikes come in. And I can go, so you see the channels here respond, and then somebody has to whistle. Yeah, if you whistle, then you can see this kind of repetitious pattern show up.
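The signal chain you just saw, a bandpass filter per channel, a half-wave rectifier standing in for the inner hair cell, and an integrate-and-fire neuron for the spiral ganglion cell, might be sketched in software like this. The filter form and the Q, threshold, and leak constants below are illustrative placeholders, not the actual chip's circuits or values.

```python
import math

def cochlea_channel(samples, fs, f0, q=10.0, threshold=1.0, leak=0.99):
    """One channel of the sketch: second-order bandpass at f0, half-wave
    rectify (inner hair cell), leaky integrate-and-fire (ganglion cell).
    samples: audio samples; fs: sample rate in Hz; returns spike times."""
    # standard biquad bandpass coefficients (0 dB peak gain at f0; b1 is zero)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0, b2 = alpha, -alpha
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    x1 = x2 = y1 = y2 = 0.0
    v = 0.0                                  # membrane potential
    spikes = []
    for n, x in enumerate(samples):
        y = (b0 * x + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, x, y1, y
        rectified = max(y, 0.0)              # half-wave rectifier ("diode")
        v = v * leak + rectified             # leaky integration
        if v >= threshold:                   # fire and reset
            spikes.append(n)
            v = 0.0
    return spikes
```

A channel tuned near the input tone fires vigorously while a distant channel stays almost silent, which is the broad-but-peaked tuning described earlier; a full model would run a whole bank of such channels with f0 spaced in log frequency, and the spike times also carry the phase information used below for localization.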
So here, by exploring the timing between the spikes, you get an idea of what frequencies are coming in, and especially in the hearing way of interpreting signals, timing is very important. And so we use this, for example, for doing localization. Unfortunately my demo is not going to work for that one, because of software updates that were done before I left, and I forgot to check that particular one. But the idea is that, because you have two ears, you can use the interaural time difference, which is one of the cues that you have for figuring out where a sound source is, and by looking at the time between the red and the green spikes you have an idea of what this delta T is. And what I wanted to show is that, even though when you look at this picture and I look along one channel, you can see sometimes there are red spikes where there are no green spikes, if I just build a histogram of possible delta T's, then very quickly, within maybe 100 spikes, you're going to see a peak in this histogram that tells you the likelihood that the sound source is in a particular location. So you don't have to do cross-correlation with the analog signals coming from the microphones; you can just correlate the digital spikes. Yes, so this is some of the work that's ongoing right now. I actually, I didn't think you would ask this question, but I do have several recordings of the spikes where we decode them back to audio. So you see, one thing painful about hearing is that I cannot hear my spikes, so it's very hard for me to change the parameters and figure out whether I'm sitting in the best place possible, whether I have the responses that I'm looking for. With vision it's easy; I can see the output of what I'm doing. And so everybody would like to have the spikes converted back to audio, but with the nonlinearities that you have, because of the neuron, because of the half-wave rectification, it's not so simple to go backwards. It's not as if you have a linear system where you can go backwards to the audio. But there's a
method that we have in place that works sort of decently on some sets of sounds, not for all of them, and you can actually hear the audio of the reconstruction. Yeah, it sounds reasonable, but the idea is that you want to be able to test it on any kind of sentences that come in, because right now it's trained on some database of sentences and then you reconstruct from the same database, if you like. But the best is if I can reconstruct any person saying anything, which is also a hard thing for the machine audition world. So of course the other thing is that you want to be able to use this new kind of representation for understanding who is talking at any one moment, basically solving all the same problems as the rest of the audition world is, and seeing whether there's any value in having the timing of the spikes. Because the way it's done right now for sounds is that, again, you have a clock running, you get a spectrogram, you know where the frequencies are that have power, and from then on you use some form of it, if you like, to code for different speakers or code for different words, right? And the question is, does this representation give you a benefit? So humans are very good at one thing that machines cannot do. Even if it's very noisy, someone's talking to you at a party, and they call this the cocktail party problem: there are lots of sounds going on, people talking, and you understand the person you're talking to. And if you decide you want to switch over and listen to maybe another person talking, it's no problem, you can switch over. But think of it: all the stuff coming in is still the same, right? Somehow, in your head, you're the one who's filtering out the stuff that's useful for you, and then you decide you can switch over to somebody else. And so we think, in situations like that, the spectrogram way might not be the best way to do it. Maybe the fact that the spikes carry the timing of certain events is a way of telling you that, if I see some
power in some frequency, it belongs to him and not to him, for example. Is it more difficult to hear in parallel, to follow the speech and to understand the rest of them? But that's different; I think it's more complicated. Now you need to do understanding of the words that you're hearing. And so, anyway, this is something that we're actually currently working on. There are some... So I have a question: I think there is some learning aspect as well, right? When you said a person can distinguish between many voices in a party or anything, I think there is some learning. If I consider my experience, I was not used to those party environments, so when I went the first time, I found it difficult to understand what people were saying. I realized others were easily understanding when multiple people were talking, but gradually I improved. So I think it's more about learning as well; it's not just the... You're learning the proper features, if you like, so that you can now make a model of that particular speaker or a particular set of words, because people also pronounce words differently, right, because they have accents and so on. But it isn't really that easy, because people who have decreased hearing and wear hearing aids can never learn to sort out the voices in a cocktail party with the hearing aids on. So they frequently take off the hearing aids when they're in a noisy environment, at just the time you'd think they would leave them on, because they can't filter the voices out. So something else is going on that we don't know, and it's very early in processing. That's right, so one thing possible is, because there's a huge feedback projection back to the cochlea, it could be that this feedback is basically amplifying certain frequency bands for you. You were mentioning that with the pre-amps you actually enhance part of the sound, or decrease it. If you had that also on the chip, with some neurons responding to the different intensities, couldn't you
actually get a better result? Because you're losing information in that step. Yes, we know that. So again, every feature that you add in comes with its own bag of problems, and it takes a long time to figure out how to get the best circuits. So you're talking about automatic gain control with the outer hair cells, and there are some groups in the past that worked on it, and the gain control isn't so good, because we have this mismatch problem. I don't want to say problem all the time, because sometimes we also say it's a feature. So you can set the parameters of the circuit so that maybe the gain control works properly at one filter, but then it doesn't work properly for the next filter, and the next filter, and the next filter. Eventually what you want is something that works properly for all the filters. Even to get to the circuit where the filters can all produce spikes in an almost even way, if you like, took ten years, maybe three or four students in different labs working on the circuits, to get it to a place where you can use it. So some of the stuff is because of the engineering side; it takes a long time to come up with good analog circuits that can give you good responses, if you like. But that is definitely on our pile of things to do. Different groups are still trying different solutions, but no one yet has a good solution. You can find a couple of papers about this, where they put the local gain control in, and then you'll see one filter looks like this, one gets broadened out and its peak is now somewhere else, who knows where, and so on. And so you don't get this lovely set of filter responses that you'd like. So anyway, I don't think there's any more time, so I'm not going to talk about the last part; so many questions, so I'm pretty much done. Let me just put up my acknowledgements page, everybody does this, right? Anyway, a lot of the slides here, or the demo, came from Tobi Delbruck, of course, and also from our sensors group, where we have to
construct all the infrastructure to make sure things get across easily for anyone else who wants to use it. Yeah, all right, that's it. Thanks.