I will talk about the Brian simulator, a simulator for neural network models. These days Brian is mostly developed by Dan Goodman at Imperial College and Marcel Stimberg in my lab. A lot of current thinking in neural network dynamics is framed in connectionism: the idea that the function of neural networks arises from interconnected networks of simple elements, where the important structural parameters are the connection strengths. I will try to show you here that classical connectionism is increasingly out of step with the neural models that are used in the field, and that we need to take this into account in simulators.

Just a quick historical recap. Connectionism comes essentially from McCulloch and Pitts, who proposed the first binary neural model, in which the activity of a neuron is a function of a weighted sum of the activities of presynaptic neurons. This model has two key aspects. The first is that the units are binary. The second, and most important, is that there is basically no time: time is discrete, and neural activity is described as operations on binary elements. Why is there no time in this model? In fact, this is not the first neural model you can find in the literature. In the 1930s, ten or fifteen years before McCulloch and Pitts, you could find very similar models, but formulated as dynamical systems in continuous time, proposed by Nicolas Rashevsky. So why did the McCulloch-Pitts model become very well known, and not Rashevsky's? The reason, and the motivation for introducing the model, is that by framing network function as operations on binary units, they could show, and that is what they did in their seminal paper, that the operation of a neural network is isomorphic to the calculus of logical propositions. That was the appeal of this model.
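To make the McCulloch-Pitts model concrete, here is a minimal sketch in plain Python (illustrative only, not Brian code): the output of a unit is a step function of the weighted sum of its binary inputs, and with suitable weights and threshold a single unit computes a logic gate, which is what made the network isomorphic to propositional calculus.

```python
# Minimal sketch of a McCulloch-Pitts binary unit (illustrative, not Brian code).
# At each discrete time step, the unit outputs 1 if the weighted sum of its
# binary presynaptic activities reaches the threshold, and 0 otherwise.

def mcculloch_pitts(inputs, weights, threshold):
    """Binary unit: step function applied to a weighted sum of binary inputs."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# With suitable weights and thresholds, units implement logic gates,
# which is why such networks map onto the calculus of logical propositions:
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
```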
And if you assume that time is not important and you are just applying operations to activities, then the most important parameters in your system are the weights, one parameter per connection, plus a few parameters of the function that is applied, in particular the threshold. From that follows connectionism: the idea that neural network function is mostly determined by these structural parameters, and that learning consists in modifying the weights.

Now, if you look at the neural models used nowadays in neuroscience research, they are typically dynamical systems of the hybrid type, that is, a mixture of discrete events and continuous dynamics. The discrete events correspond to what happens when a spike is received, the condition for spiking, and what happens after a spike; the continuous dynamics are what happens between spikes. So these are dynamical systems, and in these models time is very important; it is intrinsic, in fact. This very simple observation has an important implication: the structural parameters of these models are not just the synaptic strengths, but everything else in the model. For example, if you look at the response of a neuron to a presynaptic spike, you have a response that is a function of time, which you can characterize by its amplitude, but also by its duration and any other aspect you may think of, for example the conduction delay. This implies that classical connectionism, at least in its basic form, no longer works for this kind of model: it is not just the synaptic strengths that matter. But what do we know about how these other parameters, which are many, are learned in biological systems? The quick answer is: probably next to nothing.
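A hybrid dynamical system of this kind can be sketched in a few lines of plain Python. The following leaky integrate-and-fire neuron (all parameter values are illustrative, not from the talk) has continuous dynamics between spikes, integrated here with forward Euler, plus a discrete threshold-and-reset event, so time is intrinsic to the model:

```python
# Sketch of a hybrid dynamical system: a leaky integrate-and-fire neuron.
# Continuous dynamics between spikes:  tau * dv/dt = v_rest - v + R*I
# Discrete event: when v crosses v_threshold, a spike is recorded and v is
# reset. All parameter values below are illustrative, not from the talk.

def simulate_lif(I, dt=0.1e-3, tau=10e-3, v_rest=-70e-3,
                 v_reset=-70e-3, v_threshold=-50e-3, R=100e6):
    """Simulate one neuron driven by input current samples I (amperes)."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(I):
        # continuous part: forward-Euler step of the membrane equation
        v += dt * (v_rest - v + R * current) / tau
        # discrete part: threshold condition and reset
        if v >= v_threshold:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant suprathreshold current makes the neuron fire regularly:
spikes = simulate_lif([0.5e-9] * 5000)  # 0.5 nA for 500 ms
```

Note that the response now depends on time constants and integration details, not just on a weight, which is the point made above.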
But what we do know is that almost every element of structure in a neural system is plastic, or at least dynamic. This is an illustration of a cortical pyramidal cell, and it has many structural elements. Of course, you have synapses and their strengths, but there is also short-term dynamics in the synapses. There is the axonal initial segment, where spikes are produced, and this is plastic too; in fact, it can move depending on activity. Ionic channels are distributed along the dendrites, and these are also plastic in an activity-dependent way, et cetera; the properties of those ionic channels are plastic as well. So the point is that basically every parameter in the models we use in neuroscience research is plastic, but we generally don't take this into account, because we don't really know how it works, how those parameters change. As I mentioned in the introduction, because we don't know much about all these aspects, we mostly focus on synaptic weights and on activity-dependent learning of those weights. But what I anticipate will emerge in the future are models that take into account the plasticity of all those other, non-connectionist, structural parameters.

So now the challenge is: how do you simulate post-connectionist models? The big problem you have to face is flexibility. There is a lot, really a lot, that we don't know about neurons, so it is very difficult, perhaps even dangerous, to come up with very standardized formulations of models, because we don't really know how things are actually going to turn out. Flexibility is the main problem we tried to address in the first version of Brian. The second problem is to simulate these flexible models efficiently, and that is the problem we tried to address in the second version of Brian. So I will first talk about the first problem, flexibility.
So initially, when we started Brian in 2008, our focus was on designing a simulator that is simple to use and flexible, that is, one you can use to simulate a model that had not been thought of when the simulator was written. The motto of Brian is that the simulator should not only save the time of processors, but also the time of scientists. With this in mind, the design choice we made was that, instead of models being pre-specified components, as was most often the case in simulators at the time, models are defined by their mathematical equations, because mathematical equations are a standard that already exists out there.

I will give you an example. This is a simple neural network consisting of excitatory and inhibitory integrate-and-fire neurons that are randomly connected. You have here the three equations of the model, and this is the Brian script that simulates the entire model and produces this output here. The basic thing is that the equations are given directly in their mathematical form, together with their physical units; parameters are also given with physical units; everything is explicit. The bottom line is that the model is specified as closely as possible to its mathematical definition, including the threshold, which is a Boolean condition, and the reset, which is a series of statements. And this is true not only for the neuron models, but also for the synapses. Here is an example for spike-timing-dependent plasticity: for synapses, too, you describe the model in its mathematical form, by giving the local synaptic variables and how they change with time, as dynamical equations, together with what happens when there is a presynaptic spike and what happens when there is a postsynaptic spike. This example corresponds to classical additive STDP, but you can do a lot of different models.
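The additive STDP rule described above can be sketched in plain Python (this is not Brian syntax; the trace dynamics follow the standard additive-STDP formulation, and all parameter values are illustrative). Two local synaptic variables decay exponentially between spikes, and the discrete spike events both bump a trace and update the weight:

```python
import math

# Sketch of classical additive STDP (illustrative, not Brian syntax).
# Two local synaptic trace variables decay exponentially between spikes:
#   tau_pre  * d(apre)/dt  = -apre
#   tau_post * d(apost)/dt = -apost
# On a presynaptic spike:  apre += dA_pre;  w = clip(w + apost, 0, w_max)
# On a postsynaptic spike: apost += dA_post; w = clip(w + apre, 0, w_max)

class STDPSynapse:
    """Additive STDP with decaying pre/post traces (parameters are made up)."""
    def __init__(self, w=0.5, w_max=1.0, tau_pre=20e-3, tau_post=20e-3,
                 dA_pre=0.01, dA_post=-0.012):
        self.w, self.w_max = w, w_max
        self.tau_pre, self.tau_post = tau_pre, tau_post
        self.dA_pre, self.dA_post = dA_pre, dA_post
        self.apre = self.apost = 0.0
        self.t = 0.0

    def _decay_to(self, t):
        # continuous dynamics: exact exponential decay since the last event
        dt = t - self.t
        self.apre *= math.exp(-dt / self.tau_pre)
        self.apost *= math.exp(-dt / self.tau_post)
        self.t = t

    def _clip(self, w):
        return min(max(w, 0.0), self.w_max)

    def on_pre(self, t):
        """Presynaptic spike at time t (seconds)."""
        self._decay_to(t)
        self.apre += self.dA_pre
        self.w = self._clip(self.w + self.apost)

    def on_post(self, t):
        """Postsynaptic spike at time t (seconds)."""
        self._decay_to(t)
        self.apost += self.dA_post
        self.w = self._clip(self.w + self.apre)
```

With this rule, a pre-then-post pairing potentiates the synapse and a post-then-pre pairing depresses it, which is the classical STDP window.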
Not everything is possible, of course. For example, heterosynaptic plasticity does not really fit this framework. But there is quite a bit of flexibility there. All right, the second problem, the one we address with Brian 2, is speed. In Brian 1, the choice we made was to use Python and to interpret the simulation, because that gave us a lot of flexibility in what could be simulated, but of course at the cost of speed, since Python is an interpreted language. We had a trick, vectorization, which is to apply operations simultaneously to entire vectors, that is, to sets of neurons. But that does not work so well if, for example, you want to simulate small networks for a long period of time.

So Brian 2 is a complete rewrite of Brian that, instead of interpretation, uses code generation. From the user's point of view, it is basically the same. But what happens behind the scenes is that the equations are transformed into code that is then executed on different possible targets: for example a PC, a GPU, neuromorphic hardware, an FPGA, or even an Android smartphone. For that, Brian transforms the equations, the entire model, into code, in this case C++ code, that is automatically generated and specific to the target. We do this in two steps, illustrated here for neuron models. The first step is to transform the model into abstract code. The abstract code is not specific to a target; it is just a series of instructions in a language close to Python, a series of operations corresponding to what should be done with the equations. So in the script, you define the equations as mathematical equations, and Brian combines them with a specification of the numerical integration scheme, which is also specified in a mathematical way, into a series of statements.
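To give a feel for this pipeline, here is a heavily simplified sketch, not Brian's actual machinery, of both steps for one differential equation: an equation string plus a forward-Euler scheme is turned into abstract update statements, and those statements are then compiled for a particular target, here plain Python, and executed in a namespace of state variables and parameters:

```python
# Very simplified sketch of equations-to-code generation (not Brian's
# actual implementation). Step 1: a differential equation written as the
# string 'dv/dt = expr' is combined with a forward-Euler scheme to produce
# target-independent abstract update statements. Step 2: the statements
# are compiled for a target, here plain Python.

def euler_abstract_code(equation):
    """'dv/dt = (v_rest - v) / tau' -> abstract forward-Euler statements."""
    lhs, rhs = equation.split('=', 1)
    var = lhs.strip()[1:].split('/')[0]   # 'dv/dt' -> 'v'
    return f"_d{var} = ({rhs.strip()}) * dt\n{var} = {var} + _d{var}"

def compile_update(abstract_code):
    """Target-specific step for the Python target: compile the abstract
    statements and run them in a namespace of variables and parameters."""
    code = compile(abstract_code, '<abstract>', 'exec')
    def update(namespace):
        exec(code, {}, namespace)
        return namespace
    return update

abstract = euler_abstract_code('dv/dt = (v_rest - v) / tau')
step = compile_update(abstract)
```

For a C++ or GPU target, the second step would instead emit source code in that language from the same abstract statements, which is the idea behind the standalone mode described next.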
And then, in the second step, this abstract code is combined with a namespace, which corresponds to the different types of variables, and transformed into code that is specific to the target; so this step is target-specific. There are two modes of running in Brian 2. One we call the runtime mode: the script is still partly interpreted, that is, the model corresponding to each object is turned into code, and when the run statement is executed, Brian runs the code for each object in turn. So it is partly interpreted, for the loop, and partly code-generated. In the standalone mode, Brian takes the entire program up to the run statement and then outputs a complete program that is executed entirely on the target. This is, of course, much faster, and you can also use it for embedded platforms, for example. Brian generates a set of files, which you can see here for the C++ target, and these files can be edited afterwards. The future of Brian is basically the development of different targets for code generation. What we currently have is PCs, for Python obviously, but also for C++. We also have some support for GPUs, ongoing projects for SpiNNaker and Android smartphones, and we also want to address FPGA computing.

So let me just finish by thanking the main developers of Brian: Dan Goodman, who is now a lecturer at Imperial College London, and Marcel Stimberg, who is a postdoc in my lab in Paris. And this is the main publication on Brian 2. Thank you. Questions?

This is already done by compiler infrastructures such as LLVM, and they have also solved the problem of intermediate language representation. Do you consider such technologies as a kind of code generation back end?

Do you mean whether it could be a target?
No, it could be used to generate code for the targets, because they already support all, or many, of the targets that you have shown.

Well, I'm not sure you could simulate neural networks with the flexibility that we have here, though.

Just curious, why would you need to run Brian on a mobile device?

Ah, OK. So currently we just have a prototype that runs on Android. The motivation was to use it for embedded platforms, that is, to put it on robots. A number of scientists use Android smartphones because you can program them, they are small, and they have a number of sensors on them, so they are quite useful for robotics research, basically.

What is the relation with Neuron and other software systems for simulating multi-compartmental neurons?

Ah, multi-compartmental neurons. OK, so initially, Brian was not designed for multi-compartmental models. Now it is coming; it is in Brian 2, you can do multi-compartmental neurons. As I said, Brian was initially not made for biophysical models like multi-compartmental models, but rather for networks of relatively simple phenomenological models, and for that I don't think Neuron is very well suited. Not that it cannot simulate them; all the other simulators, if you spend enough time with them, can simulate all the models. It is just that it will take you a lot of time to do it. Our motivation was to design something that takes you, the scientist, very little time to build a new model. So now there is a bit of multi-compartmental modeling in Brian too, with the same idea in mind: we are not trying to have all the features that Neuron has, but simply to have something that is simple to use and easy to develop new things with.

I was also wondering, is it compatible with something like 9ML or NeuroML?

As I understand it, the 9ML syntax is very close to Brian's, to Brian 1's at least; I don't think that's an accident.
And so, yes, we had a project of converting between Brian's model specifications and 9ML, and this should be relatively easy, in fact, because they are quite close, I believe. Okay, thank you. Thanks. Can we go to the-