So I'm Jason. Thank you for this opportunity to talk. I'm going to be talking about DynaSim, a tool I created for neural modeling in MATLAB. My original purpose in creating this tool was to make it as easy as possible to build and explore networks of neurons with one or a few compartments, and it has grown from there. First I'm going to talk about how models are specified in DynaSim and go into some of the technical details. Then I'll briefly touch on some other features of DynaSim, and then I'll talk about some new developments that are exciting and that I think will be especially interesting to people here.

In DynaSim, functions operate on a few structures: one for specifying the model, one for the aggregated model equations, and one for the simulated data. The user can specify the model in multiple ways, but it always gets converted into a standardized specification, which divides the model into populations, connections, and mechanisms. Mechanisms are things like intracellular signaling, ion channels, or synapses; they affect the intrinsic dynamics of cells in a population, and they can also be used to connect populations. All of the information in that specification gets combined into a single set of equations in the model structure, and those equations are integrated to produce the simulated data. The user can specify models using equations or higher-level model objects, in strings or in DynaSim structures, and I'll explain what that means as I go.

This slide shows two examples of models. The first is an arbitrary set of differential equations listed in mathematical notation; you pass that to a DynaSim function called dsSimulate, and it integrates the model. DynaSim was created to do a lot more than that, though, and the next example shows a Hodgkin-Huxley-type neuron with three ion channels (the sodium, fast potassium, and slow potassium currents); this is a much more efficient way of specifying a model like that, and I'm going to go into more detail about how it works.

Now let's dive a little deeper. This slide shows the equations for a single Hodgkin-Huxley neuron. The voltage dynamics are controlled mainly by two ion currents, and the equations have been grouped according to which ion current they define. It's often the case that we have multiple populations and we want the same ion currents in each of them, so it's useful if this can be made modular. We can imagine copying those equations into text files; then we need some way to link the equations inside those text files to the equations outside them, which define the voltage dynamics. The concept of linkers accomplishes that. If you look at the sodium current file here, its linker says: wherever @current appears outside of the file, in some other equation, replace it additively with the sodium current defined inside the file. That @current could be replaced with any identifier that appears somewhere else in the model, the current could be replaced with anything else defined within the file, and you can have multiple linkers.
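To make the linker idea concrete, a sodium mechanism file looks something like the sketch below. The linker line at the bottom is the key part; the gating equations are the standard Hodgkin-Huxley forms, and the exact names and values here should be read as illustrative rather than taken from the slides:

```matlab
% iNa.mech: a sodium current mechanism (illustrative sketch)
gNa = 120; ENa = 50;                      % maximal conductance, reversal potential

INa(X,m,h) = gNa.*m.^3.*h.*(X - ENa)      % the current; X is the linked state variable

dm/dt = aM(X).*(1-m) - bM(X).*m           % activation gate dynamics
dh/dt = aH(X).*(1-h) - bH(X).*h           % inactivation gate dynamics
aM(X) = (2.5 - 0.1*(X+65)) ./ (exp(2.5 - 0.1*(X+65)) - 1)
bM(X) = 4*exp(-(X+65)/18)
aH(X) = 0.07*exp(-(X+65)/20)
bH(X) = 1 ./ (exp(3 - 0.1*(X+65)) + 1)
m(0) = 0.1; h(0) = 0.1                    % initial conditions

@current += -INa(X,m,h)                   % linker: add -INa wherever @current appears
```

The X inside a mechanism file stands for whichever state variable the host population links in, which is what lets the same file be reused across populations.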
In DynaSim, a modular collection of equations and linkers is called a mechanism. It's a very simple idea, but it's also quite powerful: mechanisms can be used to define the intrinsic dynamics of cells, to connect cells, or even to link different spatial scales. Here's the same Hodgkin-Huxley neuron, now defined using these sodium and potassium mechanisms. On the right the two mechanisms are just listed, and you can see the linker @current in the equation for the voltage dynamics; this says replace @current with INa plus IK, and you end up with the full set of equations. You can then go one step further and group all of those equations into their own file as a predefined population, and these predefined populations can be used to very efficiently specify a model that includes multiple populations. These model objects, mechanisms and populations, are modular and reusable and make it very easy to build up larger models from predefined pieces. This seems to be especially useful for experimentalists who don't have a background in the mathematics and want to play with switching out different populations, adding ion channels, or adding new intracellular dynamics.

Next I'm going to show how these mechanisms can be used to link populations into larger systems. Here we have a model with two populations connected by AMPA and GABA-A synapses. We can define each population using a neuron model similar to the one on the previous slide, with some Gaussian noise added. As I mentioned before, the specification structure divides the model into populations, connections, and mechanisms, so we can use that structure to define the two populations, give them names, set their sizes, and define their dynamics, and then use predefined connection mechanisms for AMPA and GABA-A to connect the two populations; the code sketch below shows this pattern. This same approach can be extended to connecting multiple compartments into a multi-compartment cell, and to connecting populations that are models at different scales: you can imagine having a spiking network and a neural mass model of a larger system, and linking them with a connection mechanism that, say, sums the voltages or the currents over the spiking network and feeds that into the neural mass model. In principle, this approach can be used to specify any neural system that can be described by differential equations.

Next I'm going to briefly cover some features of DynaSim and then shift to something that I think is a little more interesting. Another goal of DynaSim was to make it very easy to explore regions of parameter space. To do this, I created an efficient specification of the parameter space, made DynaSim automate the process of running simulations in parallel, and created functions that make it easy to work with all of the data that gets produced. Here's one example: we have three different strengths of drive to the E population and three different time constants for the feedback inhibition; the Cartesian product gives nine sets of parameters. You pass that to dsSimulate and it runs all nine simulations, and then there are functions for plotting all of the results very quickly.
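Pulling the last two ideas together, a minimal sketch of the two-population specification plus the parameter sweep could look like this. The population names, mechanism names, parameter names, and values are illustrative placeholders following DynaSim's documented usage, not the exact model on the slides:

```matlab
% Two populations connected by AMPA and GABA-A mechanisms (illustrative sketch)
s = [];
s.populations(1).name = 'E';
s.populations(1).size = 80;
s.populations(1).equations = 'dv/dt = @current + Iapp + randn(1,N_pop); Iapp=10; {iNa,iK}';
s.populations(2).name = 'I';
s.populations(2).size = 20;
s.populations(2).equations = 'dv/dt = @current + randn(1,N_pop); {iNa,iK}';
s.connections(1).direction = 'E->I';
s.connections(1).mechanism_list = {'iAMPA'};
s.connections(2).direction = 'I->E';
s.connections(2).mechanism_list = {'iGABAa'};

% Sweep the drive to E and the inhibition time constant: 3 x 3 = 9 simulations
vary = {'E',    'Iapp', [0 10 20];
        'I->E', 'tauD', [5 10 15]};
data = dsSimulate(s, 'vary', vary, 'parallel_flag', 1);  % run the 9 sims in parallel
dsPlot(data, 'plot_type', 'rastergram');                 % quick look at all results
```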
You can get raster plots, power plots, and time-frequency plots; the plot on the right shows the mean firing rate as a function of the two parameters, and there are many more things like that you can do in DynaSim.

In terms of performance, DynaSim in MATLAB is comparable to Brian2 in Python; the benchmark here is a Hodgkin-Huxley network with varying numbers of cells. In addition to running the code in interpreted mode, you can tell DynaSim to compile it into a MEX file, which can give a speedup of roughly 10 to 100x. When you're running sets of simulations, DynaSim automates running them in parallel on different cores of a computer, or creating jobs and distributing them to a cluster so they run in parallel on different nodes, and there are functions for loading all of that data and analyzing it. DynaSim is also available on the Neuroscience Gateway with these features, and it has a graphical interface that I find to be a useful teaching tool; you can use it for building and exploring models. I'm not going to say much about it right now, but I'm doing a demo later, and if you're interested I can show it to you there.

So now, visions of the future. We've been talking about building these models up from cells, populations, and networks; now we're going to shift our focus and think about how higher-level data can be used to constrain these models. Imagine we have a bunch of circuits and we connect them into larger systems: we can then compare that to neuroimaging data. There's already a framework in MATLAB for doing that, in the SPM toolbox, but its limitation is that it only works with neural mass models, so you can't include any lower-level biological details. What we want to do is take neuroimaging data like this and use it to constrain a model that does have those details.

The existing framework in MATLAB uses the SPM toolbox, which does neuroimaging data analysis. You get what are called statistical parametric maps that show activity mapped onto the brain, and using that toolbox you can extract features from them. Here it's showing an event-related potential for two different conditions of a task, and you can also extract spectral properties and a wide range of other features. The DCM (dynamic causal modeling) component of the SPM toolbox specifies a model and uses Bayesian inference to fit that model to the extracted features. As it exists now, it only works with these graphs of neural mass models: it essentially uses Bayesian inference to adjust the connectivity weights between the nodes in the model, and figures out what type of changes can map the ERP, say, from one condition to another. This works with EEG, MEG, fMRI, and ECoG, and a wide range of features, but it has this limitation of only working with neural mass models. So it has successfully linked systems-level modeling with systems and cognitive neuroscience, but it's lacking a lot of the lower-level biological details that neuroscientists often care about.

The idea here is to replace DCM's neural model specification and simulation with DynaSim. By doing that, we can bring in any model that can be implemented in DynaSim; it could be a model of multi-compartment neurons connected into circuits linked to different regions of cortex. You take those dynamics, define an observation model that maps them onto the features extracted from the neuroimaging data, and then Bayesian inference tries to fit some set of parameters to those features.
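As a minimal sketch of what an observation model could look like, assuming DynaSim's convention of naming simulated outputs population_variable (everything else here, including erp_observed and the simple least-squares objective, is an illustrative placeholder rather than the actual DCM machinery):

```matlab
% Map simulated circuit dynamics onto an ERP-like feature (illustrative sketch)
data = dsSimulate(s);                      % simulate the DynaSim model
v = data.E_v;                              % voltages of population 'E': time x cells
erp_sim = mean(v, 2);                      % crude observation model: population-average voltage
err = sum((erp_sim - erp_observed).^2);    % objective an inference routine would minimize
```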
The proof of concept for this is almost complete, and the next step after that is to make it more usable. I think this is an exciting direction for linking cognitive neuroscience with lower-level biology: you could do things like looking at the impact of, say, drugs on neural system dynamics associated with changes in cognitive performance.

The bigger picture is that for this to really be powerful, we need to use a larger neuroinformatics infrastructure. To do this optimization with detailed models, we could link to something like the Neuroscience Gateway, distribute the simulations there, have them run on their machines, get back the results, and then do the optimization on those results in an iterative process. And since DynaSim is able to specify any neural model, we could take more complicated models from databases like Open Source Brain, the Allen Institute, or the Human Brain Project, feed those in, and do the optimization on them. Finally, we could take the optimized models and give them back to the community by exporting them to, say, Open Source Brain or some other repository using NeuroML. There are other things we could do as well: optimize the models using different types of experimental data, or use different optimization methods. So, some interesting directions. That's it. If you have any questions, I'm happy to take them, and I'm doing a demo later. Thank you for listening.

Any questions for Jason?

Hi. Okay, great talk. I had a question related to the last things you were talking about just now, the integration with DCM. In my experience, the variational Bayesian inversion routines they have there for the neural mass models are really tailored toward models with an analytically defined gradient. That's never going to be the case for the models you're talking about, and it's almost never the case even for neural mass models in general, even though they're relatively simple. In my experience, when using those tools, like SPM's NLSI, the nonlinear system identification optimization routine, it's incredibly slow when you just give it a black-box model and don't give it that analytic definition of the Jacobian. But it sounds like you've gotten pretty far with something that, from what I know, I would have thought is pretty hard. Is that something you've encountered? Does this sound familiar?

Yes. So far it's tractable using just a few nodes, each with populations of tens to maybe a hundred cells, and the optimization can take a very long time without an analytical solution; for instance, it could take days, which is why I think it's important to incorporate these clusters, so that instead of days it might come down to a few hours. But yeah, that's a big challenge. I think it's possible, and how many neurons we need for this to be useful is an open question. If it needs more, we could simplify the neuron models and use something like Izhikevich neurons.
But for a long time there's going to be a trade-off between the complexity of the model we can optimize and the infrastructure we're using. Hopefully it'll eventually be powerful enough.

Any more questions?

Have you thought about sustainability? I mean, a lot of simulators have been developed over the years, and the majority of them aren't usable anymore because the people who developed them have left science or moved on to other things. So how sustainable is DynaSim?

Excellent question. So far I've done this without funding, and I'm going to be applying for grants. If I'm able to get funding, then it'll be developed for sure for at least a few more years. There are also other people working on it whom I've been trying to recruit to take over development and move it forward in other directions. But still, without funding you could run into the same problem of it dying off, so it really comes down to that. I'll be using it for at least the next two or three years, even without funding, and developing it, and if it catches on, I'll keep developing it.

Yeah, I think the key is to push its unique properties. Why would someone choose DynaSim over, say, Brian2? If you want to integrate with other MATLAB tools like SPM, that's a very good reason. So I think you need to push the unique selling points to get more users and make it sustainable in the long term.

While the next speaker is setting up, I think we have one more question from Greg.

Yeah, just a quick question.