also. It is the other way around too: what supercomputing can gain from our understanding of the brain. So first of all, we know that the neuroscience field needs supercomputing. We have a lot of data, it is fragmented, and we lack a mechanistic understanding of the brain. One way to improve that is of course to try to do a multi-scale integration of the knowledge and data that we have, and modeling and simulation of various aspects of the brain is a way forward. Sometimes that also requires real-time simulations, or even simulations that go faster than that, because if you want to study development, plasticity, learning and so on, we might not want to wait as long as it takes in reality. So we might even want to go beyond real-time simulation. And we will hear from the speakers today how to speed up the simulations as much as possible. It also goes the other way around: future computing will gain from our improved understanding of the brain. We already know that our brains have amazing information-processing capabilities and that they are much more energy efficient than today's computers. They are also robust, fault tolerant, and able to adapt to a changing environment, and all of these things we can take inspiration from when trying to develop future computing technologies. We will also hear examples during the workshop of how to implement more brain-like capabilities on neuromorphic computing systems. So we have four talks, 20 to 25 minutes each, which will exemplify these things. We will hear about the challenge of simulating a large part of the cerebellum, which means that we really need efficient systems. We will also hear how a simulation tool called Brian can help us when we want to switch between different platforms in an efficient way. We will hear how neuromorphic engineering systems can be inspired by how the brain works. And we will also see examples of how we can enhance neuromorphic systems with brain-like capabilities such as learning and plasticity. OK, so the first speaker is Tadashi Yamazaki. He has a computer science degree from the Tokyo Institute of Technology. He was a research scientist at the RIKEN Brain Science Institute, and he is now an assistant professor of mathematical information sciences at the University of Electro-Communications in Tokyo.

Thank you for the introduction, and thank you to the organisers of this Congress for inviting me. This is actually my first time visiting Australia, so I'm very excited about this. So a big thank you to you all. Today I'd like to talk about the cerebellum. Most of you know the cerebellum, this one, this small one. Yeah, this one. And it actually is small: "cerebellum" is a Latin word meaning "the little brain". It only occupies about 10% of the whole brain volume. So it's small. However, the cerebellum contains about 80% of the neurons in the whole brain, and that makes the cerebellum really interesting. So the cerebellum does something very useful using this enormous number of neurons. That's the point. Another important thing about the cerebellum is that the spontaneous firing rate is very high compared to cerebral cortical neurons. This means that even when we are just resting, our cerebellum is doing something, presumably for some useful purpose. It's a kind of mystery. So what does the cerebellum do?
Traditionally, the cerebellum is thought to play an important role in motor learning and motor control. These days, however, several lines of evidence suggest that the cerebellum is involved even in cognitive functions such as working memory. In our current understanding, the cerebellum is thought to learn or acquire internal models and to use them for these tasks. Let me explain what an internal model is and how the cerebellum uses it. The pink boxes represent higher motor cortices and the primary motor cortex, some sensory cortices, and a controlled body part such as an arm. This is a schematic of feedback error learning, proposed by Dr. Kawato many years ago. These pink regions implement feedback control of our body parts. The higher motor area forms the desired movement, and the primary motor cortex generates a motor command to achieve this desired movement. The motor command moves the actual body part, such as an arm. The actual movement is fed back to the primary motor cortex via the sensory cortex. So this is feedback control. The green box is the cerebellum, and the cerebellum is attached like this, sharing the same input from the higher motor cortex with the primary motor cortex. The cerebellum learns an internal model of the primary motor cortex, which is a table of the input-output relationship of the primary motor cortex. So the cerebellum can try to mimic the dynamics of the primary motor cortex. After learning, the cerebellum offloads the computation originally performed by the primary motor cortex, and it replaces the feedback control with feedforward control. So this is what the cerebellum is thought to do in our current understanding: the role of the cerebellum may be computational offloading of the primary motor cortex, or more generally of the cerebral cortex.

What does the cerebellar circuit look like? This is an illustration of a cerebellar corticonuclear microcomplex, which is thought to be the functional module of the cerebellum. The microcomplex is composed of just seven major types of neurons: granule cells, Golgi cells, Purkinje cells, stellate cells, basket cells, inferior olive neurons, and cerebellar nuclear neurons. As I already said, the cerebellum contains about 80% of the neurons in the whole brain, and most of them are cerebellar granule cells. They are the smallest neurons in the whole brain, and their number is enormous. Their soma size is just five micrometers, and they are densely packed, as dense as one million neurons per cubic millimeter. So the cerebellum is composed of many tiny computational elements: granule cells. Moreover, the cerebellum has a modular organization of microcomplexes: the whole cerebellum is generated by copy-and-paste of these microcomplexes. So, to summarize, the functional role of the cerebellum is computational offloading of the cerebral cortex, and its structure is a many-core architecture composed of cerebellar granule cells with a modular organization of microcomplexes. These properties remind us of the graphics processing unit. This is a figure of an NVIDIA GPU, maybe a K20, perhaps. In a GPU, the functional module is a streaming multiprocessor, and one streaming multiprocessor is composed of many computational elements, the CUDA cores. The whole GPU is a copy-and-paste of streaming multiprocessors.
So we may say that if the cerebral cortex were a CPU, then the cerebellum would be a GPU. This is my first message. I have been building a very large-scale spiking network model of the cerebellum, and next I will explain how I implemented it and how we conduct numerical simulations efficiently. This is a schematic of the cerebellum model that we have been building. It is a model of one cubic millimeter of the cerebellum, so it contains more than one million granule cells, and it also contains the other types of neurons in these numbers. They are modeled as conductance-based leaky integrate-and-fire units. The neurons are connected according to anatomical data, and the parameters are taken from rodent electrophysiological data, so it is a fairly realistic network. Moreover, there are two plasticity sites. One is at parallel fiber-Purkinje cell synapses, which undergo long-term depression, or LTD, by conjunctive activation of parallel fibers and the climbing fiber. The other site is at mossy fiber-cerebellar nuclear cell synapses, which undergo bidirectional plasticity, both LTP and LTD, by Hebbian mechanisms. Once we have built this kind of model, we need to conduct a numerical simulation. This is generalized pseudocode for a computer simulation of a neural network; most of you know this kind of code. The outer loop is the loop over time. For each time step, we need to calculate the neuron dynamics. The inner loop is the loop over neurons: for each neuron and for each time step, we calculate the synaptic conductance, update the increment of the membrane potential, and update all the neuron states. These three steps must be done sequentially. But if we focus on this calculation, the calculation for one neuron is independent of the other neurons, so using GPUs we can parallelize this part with respect to the neurons. Note that we cannot perform different calculations on different GPU cores: all of the GPU cores must perform the same calculation on different data. This kind of computation is called single instruction, multiple threads, or SIMT for short. So, once again, the same calculation on different data must be performed on the GPU. Of course, this is a limitation: we cannot perform arbitrary calculations on GPUs, we must perform the same calculation on all GPU cores. But SIMT is actually ideal for our cerebellar model, for the following reasons. The obvious bottleneck of a computer simulation of our cerebellar model is, of course, the one million granule cells, and all the granule cells obey the same dynamical equations, so we can simply perform the same calculation. The cell parameters differ across neurons, of course, so they are different data. We have other neurons as well, Golgi cells, Purkinje cells and so on, but their numbers are much, much smaller than the granule cells, so their cost is negligible. So SIMT is ideal for our cerebellar model, and a GPU is very good for simulating the cerebellar network. We have been using four GPUs simultaneously; actually, we use two NVIDIA GeForce Titan Z cards, and each Titan Z board contains two GeForce Titan GPUs, so we use four GPUs.
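To make the loop structure and the SIMT point concrete, here is a minimal CUDA sketch, not the speaker's actual simulator code: one thread updates one granule cell, modeled as a conductance-based leaky integrate-and-fire unit, and all threads execute the same instructions on different data. The parameter values and array names are placeholders, not the values used in the model.

```cuda
// One CUDA thread per granule cell; all threads run the same instructions
// on different data (SIMT), which is why this maps well onto a GPU.
__global__ void update_granule_cells(int n,
                                     float dt,            // time step [ms], e.g. 1.0
                                     const float *g_exc,  // excitatory conductance per cell (precomputed)
                                     const float *g_inh,  // inhibitory conductance per cell
                                     float *v,            // membrane potentials [mV]
                                     unsigned char *spiked)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Hypothetical parameters, for illustration only.
    const float C_m     = 3.1f;    // membrane capacitance
    const float g_L     = 0.43f;   // leak conductance
    const float E_L     = -58.0f;  // leak reversal potential
    const float E_exc   = 0.0f;    // excitatory reversal potential
    const float E_inh   = -82.0f;  // inhibitory reversal potential
    const float V_th    = -35.0f;  // spike threshold
    const float V_reset = -55.0f;  // reset potential

    // Conductance-based leaky integrate-and-fire dynamics:
    // C_m * dV/dt = -g_L*(V-E_L) - g_exc*(V-E_exc) - g_inh*(V-E_inh)
    float dv = -(g_L      * (v[i] - E_L)
               + g_exc[i] * (v[i] - E_exc)
               + g_inh[i] * (v[i] - E_inh)) / C_m;
    v[i] += dt * dv;

    // Threshold-and-reset spike generation.
    spiked[i] = (v[i] >= V_th);
    if (spiked[i]) v[i] = V_reset;
}
```

A host-side loop over time steps would accumulate the synaptic conductances and then launch this kernel once per 1 ms step, for example `update_granule_cells<<<(n + 255) / 256, 256>>>(n, 1.0f, d_g_exc, d_g_inh, d_v, d_spiked);`.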
And using these four GPUs simultaneously, we achieve real-time simulation, where real-time means that a computer simulation of one second of cerebellar activity finishes within one second of real-world time, with a temporal resolution of one millisecond. So this is very nice. For further information, my student Masato will explain the details at his poster this afternoon, after the workshop, so please ask him; the poster number is P06. Now I will introduce some applications of our cerebellar model. Real-time computing has at least two benefits. One is that real-time signal processing becomes possible, which is necessary for engineering applications such as robotics. The other is that we can conduct very long computer simulations, maybe days or weeks of simulated time, within a practical and realistic time. The first example is a batting robot. It is just a demonstration and not so interesting from the engineering viewpoint, but let me explain anyway. This is a robot. It has a fan, or bat, and behind the robot there is a back net. Around here there is a toy pitching machine, and the pitching machine throws a small ball; the robot tries to hit the ball with this fan or bat, but of course the robot fails at first. When the robot fails to hit the ball, the ball hits the back net, and a sensor attached to the back net detects the error. Using this error information, our cerebellar model learns the correct timing to swing the bat. So the task is learning the correct timing: when the ball arrives, when should the robot swing the bat to hit it correctly? This is actually an analogy of eyeblink conditioning, an experimental paradigm used to study cerebellar learning mechanisms. Let me show you a movie. The first time, the robot does nothing. During training, the robot starts to swing the bat, but it is still delayed, and after enough training the robot can hit the ball. Maybe once again: it fails, then it swings but still fails, still delayed, and after that the robot learns the correct timing to hit the ball. So this is just a demonstration of real-time signal processing using the cerebellar model; it might not be so interesting, but anyway. The other example is that we can conduct a very long computer simulation to understand the memory formation process within the cerebellum. Memory formation has at least two stages. The first stage is memory acquisition, which occurs during training, and the second stage is memory consolidation, which occurs after training to consolidate the learned memory. Dr. Nagao and his colleagues demonstrated this two-stage process in very beautiful experiments. They adopted gain adaptation of the optokinetic response, or OKR, an eye-movement reflex, and they trained mice with one hour of training every day. With one hour of daily training, the OKR gain, which is the amplitude of the eye movement, increases from here to here, and after the training the animals go back to their cage and just relax, so the learned gain decreases. The next day, again, one hour of training, and the OKR gain increases, and after that the learned gain decreases again. You can see that these gain increases represent memory acquisition during training, and you can also see that the OKR gain gradually increases across the five days. This slower process represents post-training memory consolidation.
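As a rough illustration of the real-time criterion stated above, here is a small host-side sketch, assuming a hypothetical placeholder `step_network()` that performs one full 1 ms network update (synapses, neurons, and plasticity). It simulates one second of activity in 1000 steps and reports the real-time factor, where a value of 1 or more means the simulation keeps up with real time.

```cuda
// Sketch of a real-time check: 1 s of simulated activity at 1 ms resolution
// should finish within 1 s of wall-clock time.
#include <chrono>
#include <cstdio>

static void step_network(float dt_ms) {
    // Placeholder: launch the per-neuron and per-synapse update kernels here.
    (void)dt_ms;
}

int main() {
    const float dt_ms   = 1.0f;   // temporal resolution, as in the talk
    const int   n_steps = 1000;   // 1 s of simulated time

    auto t0 = std::chrono::steady_clock::now();
    for (int t = 0; t < n_steps; ++t)
        step_network(dt_ms);
    auto t1 = std::chrono::steady_clock::now();

    double wall_s      = std::chrono::duration<double>(t1 - t0).count();
    double simulated_s = n_steps * dt_ms * 1e-3;

    // Real-time factor > 1 means the simulation runs faster than real time.
    std::printf("real-time factor: %.2f\n", simulated_s / wall_s);
    return 0;
}
```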
Post-training memory consolidation is very interesting because during memory consolidation the animals do nothing; they are just resting. Even when the animals are just resting, their cerebellum is doing something to consolidate the learned memory. So I am interested in the mechanisms of post-training memory consolidation, and I conducted a very long computer simulation using our model. Normally such a computer simulation is very slow, sometimes 100 times slower than real time, but thanks to real-time computing using multiple GPUs, we could conduct a computer simulation of one week of motor learning training within just one week. So we ran this simulation continuously for one week. The left figure shows the simulated OKR gain, and you can see that the OKR gain increases with each hour of training, and after that the gain decreases, but across the five days the learned gain gradually increases. This is consistent with the experimental data. The right panel shows the simulated eye-movement trajectories before and after training on the first, second, third, fourth, and fifth day, and these are also consistent with the experimental eye-movement trajectories. So we were able to successfully reproduce the OKR gain change over one week. It is a very long-term simulation, and yet the temporal resolution is just one millisecond. Again, Masato will explain the details at his poster, so please ask him about this as well. So I am almost finished; let me discuss the challenges and future directions of neuromorphic computing. I think there are two directions. One is engineering applications, such as robotics, where real-time computing is the key issue. The other is as a scientific tool: we develop a model as a scientific tool to obtain a better understanding of how the brain works, and in this case we need a model with a finer spatial scale and a finer temporal scale; it must be a very large, realistic, and detailed model. So there are two directions, but I think pursuing both is very important, and closing the loop between science and engineering is very important for obtaining a better understanding of what the brain computes and how the brain works. That is almost all of my talk, but let me make an announcement. Next month we have the annual conference of the Japanese Neural Network Society, and we will have a special session on neuromorphic computing and high-performance computing with invited speakers. They will talk about the K supercomputer, FPGAs, which are programmable hardware, SpiNNaker, which is a dedicated neuromorphic chip, and GPUs. After this session we will have an invited talk about neuromorphic computing at IBM by Dr. Koichi Kajitani from IBM Research Japan. He will talk about IBM's latest neuromorphic chip, TrueNorth, which was published in Science last year. We are very excited about this talk, and if you are interested in these issues, please join us next month in Tokyo. Finally, I would like to thank my collaborators and funding sources. Thank you for your attention.

So we have time for some questions. Maybe I missed something, but what is the plasticity model, synapse model, and neuron model you use in your system? Plasticity? Yes, and synapse and neuron. Okay, we have two plasticity sites. One is at the parallel fiber-Purkinje cell synapse.
That synapse undergoes long-term depression by conjunctive activation of parallel fibers and the climbing fiber. The parallel fibers convey contextual information, and the climbing fiber conveys error information; it is a kind of error signal. The other plasticity site is the mossy fiber-cerebellar nuclear cell synapse, and that is just Hebbian learning. Is that okay? So for your neuron model, is it all single compartment? Single compartment, yes. So your real time is based on this single-compartment model; you claim that you can do real-time simulation? Yes, real-time simulation of roughly one million neurons, all single compartment. And how many synapses per neuron are connected? For the Purkinje cells: we actually have 16 Purkinje cells, and each Purkinje cell has 0.2 million synapses, so 200,000 synapses per Purkinje cell, and we have 16 Purkinje cells. I also had a follow-up question on the plasticity, out of curiosity. Can you get away with just LTD between the parallel fibers and the Purkinje cells, or do you have to have some mechanism that reverses that? Could you say it again? Yes, you said you had plasticity between the parallel fibers and the Purkinje cells, LTD. Can you get away with only downregulating the synaptic connections? I see, I see. Actually, there is also an LTP. The LTP is induced by the sole activation of parallel fibers, but I omitted this explanation, sorry. We need it to recover the depressed synapses to the normal level. So actually both forms of plasticity are included. Okay, thank you. And then we go to the next speaker.
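As described in the answer above, the parallel fiber-Purkinje cell plasticity combines LTD for conjunctive parallel fiber and climbing fiber activation with LTP for parallel fiber activation alone, so that depressed synapses can recover. A minimal CUDA sketch of such a rule, with hypothetical learning rates and bounds rather than the model's actual values, might look like this:

```cuda
// One thread per parallel fiber-Purkinje cell synapse.
// LTD: parallel fiber and climbing fiber active together (error signal present).
// LTP: parallel fiber active alone, weight recovers toward a baseline of 1.
__global__ void update_pf_pc_weights(int n_synapses,
                                     const unsigned char *pf_active, // parallel fiber spike this step
                                     unsigned char cf_active,        // climbing fiber spike this step
                                     float *w)                       // PF->PC synaptic weights
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_synapses) return;

    const float ltd_rate = 0.005f;   // depression per conjunctive PF + CF event (hypothetical)
    const float ltp_rate = 0.0005f;  // recovery per PF-only event (hypothetical)

    if (pf_active[i]) {
        if (cf_active)
            w[i] -= ltd_rate * w[i];          // LTD
        else
            w[i] += ltp_rate * (1.0f - w[i]); // LTP toward baseline
    }
}
```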