I'm going to talk a little bit about how we connect multi-scale modeling approaches in MS to building computational communities. I'm going to talk a little bit about science, a little bit about software, and a little bit about community. Those are going to be the three themes running through my talk. And I'm going to start the conversation with a survey of a couple of the approaches that have been taken, to kind of flesh out what we mean by multi-scale modeling. So first we're going to look at some models that go from axons down to molecules, and then at models that come up from molecules to the level of cells. By the point I get there, there's still going to be a significant gap; that's the state we're in right now. But we're going to talk about how we address that gap through the creation of software platforms, specifically open platforms that are modular, and how that also needs to be supported by a community of folks who come together to help fill in that gap, so that ultimately we can get to integrated disease models. The landscape right now for multiple sclerosis systems biology modeling is fairly sparse. It's mostly composed of axon models of various kinds in the realm of biophysics. A truly integrated model, the kind we'd like that would cover all the relevant processes, does not exist today; that's just where we are right now. But there are really important things already being done in the multi-scale world that help us move this forward, and I'd like to tell you about some of those. So first, axons to molecules. We know that multiple sclerosis is a condition that largely affects the white matter, and when we talk about the white matter, we're really talking about the myelin sheaths.
And we're talking about the condition where myelin sheaths that were there today go away, and the way that affects axonal conduction. Coggan 2011 is one example of a paper that used multi-compartmental modeling, with ion channel modeling, to play with what you can get computationally when you have a model of axonal conduction that first has myelin, and then you take the myelin away and see what that looks like. The model is able to reproduce conditions that are known to affect axons: failed conduction, where an action potential starts on one side but doesn't make it to the other end at all; recovered conduction, where an action potential starts, doesn't arrive at exactly the time it's supposed to, but does make it across; evoked afterdischarge, where you've got one action potential and then way too many at the end; and spontaneous ectopic spiking, where you've actually got action potentials coming back in the other direction. So these are things you can start to explore computationally with ion-channel-based modeling: you can see the progression of the action potential under normal conditions, or under demyelinated conditions where you're not getting as many action potentials as you want, or look at these afterdischarge conditions. These kinds of models basically go from what axons do down to some of the ion channel dynamics underlying them. But if we look a little further at the anatomy of the problem, we realize of course that neurons are cells too, and axons in fact have a rich set of subcellular activity going on. If we peel away the myelin sheaths, which are of course just cells themselves, we find that there are lots of things that regulate ion channels at this level.
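None of this is the Coggan model itself, but the core intuition, that demyelination raises leak and can push a segment below the threshold needed to regenerate the spike at the next node, can be sketched in a few lines. Every number here (the leak values, the threshold, the regenerate-at-each-node rule) is an illustrative toy assumption, not fitted biophysics:

```python
def propagate(n_segments, demyelinated, leak_demyel=0.6, leak_myel=0.1,
              threshold=0.5):
    """Toy axon: the spike must be regenerated at every node.

    Each segment attenuates the (normalized) spike amplitude by its leak;
    if the attenuated amplitude drops below threshold, conduction fails
    there. All parameter values are illustrative only.
    """
    amplitude = 1.0
    for i in range(n_segments):
        leak = leak_demyel if i in demyelinated else leak_myel
        amplitude *= (1.0 - leak)      # passive loss along the segment
        if amplitude < threshold:
            return i                   # failed conduction at segment i
        amplitude = 1.0                # node of Ranvier regenerates spike
    return n_segments                  # spike reached the far end
```

With all segments myelinated the spike crosses the whole toy axon; demyelinating even a single segment reproduces the "failed conduction" case described above.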
And so for a deeper understanding of the processes underlying this, it's important for us to dig in a bit further. There has now been modeling at a smaller spatial scale that has looked at the EM level to reconstruct nodes of Ranvier. So now we're looking at a cutaway of myelin sheaths, with an axon running through and the node in the middle, viewed at different slices. And in fact there have been efforts to reconstruct computationally, in 3D and at a much higher level of resolution, what's going on here, in models known as electrodiffusion models. Here we're no longer representing ion channels as terms in an equation; we're actually looking at individual molecules floating around in concentrations. Models such as the one published by Lopreore in 2008 use finite element modeling to drill into how the concentrations of sodium and potassium change over the course of the action potential. And the key point is that this is now a jumping-off point to get down to what's happening at the subcellular level. Importantly, these models need to be reconciled with each other in order for us to cross these boundaries. The idea is not that you build one simulation at the highest level of detail, but that you build simulations at high detail where they're needed and then compare them against models of lower resolution. So this is an example of taking the kind of cable model I showed you before and the electrodiffusion model I just showed you, and comparing how action potentials look between the two, one of course taking a lot more compute time and one a lot less. But once you've got them fit together, this is the link in the chain that lets you cross from one scale to another. Okay, and so as I said, there's quite a bit going on in terms of the internals of these cells.
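To make the electrodiffusion idea concrete, here is a deliberately minimal sketch: one-dimensional diffusion of an ion concentration profile solved by explicit finite differences. This is a stand-in for the full finite-element electrodiffusion machinery (no electric field term, no real geometry); the grid spacing, diffusion coefficient, and time step below are arbitrary illustrative values:

```python
import numpy as np

def diffuse(c0, D, dx, dt, steps):
    """Explicit finite-difference integration of dc/dt = D * d2c/dx2.

    The state is an ion concentration profile along the axon, not a set
    of lumped channel variables, which is the essential difference
    between electrodiffusion models and cable models.
    """
    c = c0.astype(float).copy()
    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        c += dt * D * lap              # boundary cells held fixed
    return c
```

Reconciling scales then amounts to running a model like this at fine resolution, running the cheaper coarse model, and checking that the quantities they share agree.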
So that takes us from axons to molecules. Now we want to go from molecules up to cells, to things that are not so much the realm of computational neuroscience but get us into computational biology. Let's take a brief tour of what's happening inside computational biology by picking one paper. And again, as I mentioned before, there's going to be a gap, and that's the state of where things are, but let's accept that and talk about how we address it in the next section. One breakthrough that has come out recently is a whole-cell computational model that doesn't just look at individual processes inside cells but does its best to combine 28 different underlying biological processes into a single integrated model, where you're able to essentially put in a digital genome and get out predictions of phenotype. So it goes all the way from genotype to phenotype using a model that has a lot of parameters and is fairly complex: each of the 28 cell processes listed here is literally an algorithm, and the model runs them and combines them together. It seems like an audacious, ambitious thing to do, and I highly recommend you check out this paper. It came out in Cell, which is also interesting, as that's not a typical venue for computational output. So let's look a little at what went into it. It's built on top of knowledge compiled in a lot of different resources that have been assembled over the years: you've got PubChem, you've got UniProt, you've got DrugBank, lots of data combined from experiments done in microbes of many different kinds. And of course this is a prokaryotic cell. There's a big difference between a prokaryotic cell and a eukaryotic cell, and we acknowledge that that's sort of the gap part. But what's possible at this level tells us where the future is going. So what was it able to do?
At the level of resolution it had, you're able to look at the cell go through different phases of the cell cycle, from replication initiation to replication and all the way to dividing. And you're able to track different species of molecules, represented as state variables inside the model, over those phases: you can check whether the right species are going up when they should during a given part of the cell cycle, or going down at the right point, and that sort of thing. The important thing, of course, is where these models confront real data. I think the real success of this model is that it takes real data derived from observing this particular microbe growing over time, seeing how fast it grew, as well as from gene deletion experiments. One of the things about this particular cell type is that it's easy to genetically engineer and make a lot of mutants. And if you claim to have a genotype-to-phenotype model, it had better be robust under conditions where you change the genome, so you can see whether that actually affects the phenotype. And sure enough, over here on the left is an example looking at growth rates of this particular organism under different genetic mutants. Every dot here is a different genotype, its position tells you its observed growth rate, and this axis is the predicted growth rate constant. The wild type, which the model was trained on, sits here in the middle, right on unity as it should, and as different genotypes are put in, they fall along this line; you can see there's a pretty good relationship. Some of the outliers seen here were then further explored in the model to understand where the difference between prediction and observation came from, and that was used to improve the model and also actually to make some discoveries about how that genotype was working.
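A vastly scaled-down sketch of the same idea, state variables stepped through a cell cycle until replication completes and the cell divides, with the "genotype" reduced to a single growth-rate parameter, might look like this. Nothing here comes from the actual whole-cell model; it is a toy to show the simulation pattern:

```python
def simulate_cell(growth_rate, replication_rate=1.0, dt=0.01):
    """Two state variables evolved until the cell divides.

    protein grows exponentially (a stand-in for metabolism/translation);
    dna replicates linearly; division happens when replication completes.
    Returns (division_time, protein_at_division).
    """
    protein, dna, t = 1.0, 0.0, 0.0
    while dna < 1.0:
        protein += dt * growth_rate * protein   # forward-Euler growth step
        dna += dt * replication_rate            # replication progress
        t += dt
    return t, protein
```

A slower-growing "mutant" (a smaller `growth_rate`) then divides with less accumulated protein, the toy analogue of predicting a growth-rate phenotype from a genotype parameter.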
And then, importantly, there's the gene deletion part, which I think is the most interesting part: as they took different genes out of the model and compared whether removing a gene was essential or not essential for the survival of that cell against reality, they got an 80% match rate, which is pretty impressive, I think, for any model. So let's see how much further this can go in the future, but it gives you a sense that we're now starting to see, on the one hand, an ability to come down from the behavior of a cell, like an action potential, and on the other, an ability to come up from genotype to phenotype-type measurements. Of course there's still a gap, but the question is what we do about that and how we move forward as a scientific enterprise to try to fill it. So what we'd like to see in the future is integrated models. We'd like to see the subcellular processes of interest; we'd like to have eukaryotic cells with mitochondrial function, interacting with other processes, so it's going to take a leap to get there. We'd like to see myelinated neurons in both the CNS and PNS; we'd like to see models of oligodendrocytes and Schwann cells, which people aren't really working on right now. We'd like to see models of the immune cells that are known to attack the myelin, and models of the communications between these cells, specifically between neurons, oligos and Schwann cells, and between T cells and oligos. All of this is sort of the stuff of fantasy today, but tomorrow it may be approachable with some of the underlying foundations of the methods that you've seen here. But in order to do it, I think we're going to have to get a bit more structured about the way we combine these results together.
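That headline validation, comparing predicted against experimentally observed gene essentiality, boils down to a simple agreement score. A sketch with made-up gene names (the real comparison spans the organism's full gene set):

```python
def essentiality_match(predicted, observed):
    """Fraction of genes where the model's essential/non-essential call
    agrees with experiment. Both arguments map gene -> bool (essential)."""
    genes = predicted.keys() & observed.keys()
    hits = sum(predicted[g] == observed[g] for g in genes)
    return hits / len(genes)
```
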
So we need software platforms that let us do this, and we need them to be open software platforms. What I showed you today were a few views of models that exist at different spatial scales, combining very diverse models, algorithms, and a lot of very different code. It also involved very different time scales, and so the challenge is that you need to be able to account for the fact that these models are very different and very heterogeneous. To do that, we need pretty robust software infrastructures, and the keynote speaker this morning actually commented on the fact that this is a challenge for academia. So how might we go about it? It needs to be open source, and it needs, I think, to rely on some of the standards used in industry to build modular code, which isn't necessarily the purview of scientists but has been done in industry. It needs to use the best software technologies out there, and it needs to be upfront about forming a collaboration between scientists in academia and engineers who are more industry-minded. People sometimes ask me whether I'm leaving academia or staying in academia, and what I like to tell them is that we need folks who really have one foot in both. To solve problems like this, we need all the benefits of academia, where there's a stimulating environment and unbound exploration, but we also need the hard-nosed project management that comes from industry. So the challenge comes down to this: translating what comes out of papers like the ones I showed you into well-engineered software artifacts is hard, and that's where we get to community, and that's why collaboration is such a critical part of the enterprise we have to build going forward. The basic progression looks like this.
There's a physical reality of biological mechanisms. We look at that through anatomy; there's stuff going on, and scientists are needed to look into it and pull out some principles of how it relates to physics, to the actual underlying mechanisms. They typically write that up in papers, and sometimes those in the computational realm take those observations and turn them into some kind of code implementation. But we're not done when we have a code implementation of the physics. I showed you an example of that before; we need to actually turn it into a software module that interoperates and scales on a platform. That's the only rational way we can start to combine these things together. So we've been working on open source platforms. One of them is called Geppetto; you can check it out at geppetto.org. It's very early days, but there's something you can download. It's playing with this idea of how we make things that can be developed independently but then hooked together as modules, because these different pieces need to be able to integrate with what's there today and what's there tomorrow. The basic stack looks like this. It's built on standardized modeling languages. There's a core framework, and then there are different modules that hold the implementations of these algorithms, coming from a diverse set of sources; a layer that combines them together into a simulation; an API layer and a web-based access layer; and then on top of that, apps that let you do different kinds of investigations, like defining a simulation you're interested in out of those pieces, to do in silico experiments, to ask questions, and to make predictions. So what we'd like is for models like the ones I showed you today to fit together into this platform.
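To make the "modules on a core framework" idea concrete, here is a hypothetical sketch of what such a module contract could look like. This is not Geppetto's actual API; the interface names (`advance`, `state`) and the toy decay module are invented purely for illustration:

```python
from abc import ABC, abstractmethod

class SimulationModule(ABC):
    """Contract every model implementation agrees to, so the core
    framework can compose heterogeneous models without knowing
    their internals."""

    @abstractmethod
    def advance(self, dt: float) -> None:
        """Advance the module's internal state by dt."""

    @abstractmethod
    def state(self) -> dict:
        """Expose current state variables to higher layers (API, apps)."""

class ExponentialDecay(SimulationModule):
    """Trivial stand-in for a real model (e.g. an ion channel module)."""

    def __init__(self, x0: float, rate: float):
        self.x, self.rate = x0, rate

    def advance(self, dt: float) -> None:
        self.x *= 1.0 - self.rate * dt   # forward-Euler decay step

    def state(self) -> dict:
        return {"x": self.x}

def run_simulation(modules, dt, steps):
    """The 'simulation layer': advance every module in lockstep."""
    for _ in range(steps):
        for m in modules:
            m.advance(dt)
    return [m.state() for m in modules]
```

The point of the design is that a cable model and an electrodiffusion model, written by different groups, could both sit behind the same interface and be stepped by the same loop.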
And so, in order to do that, the last piece of the puzzle is that we can't just build a software platform and have everybody magically use it, because that's not realistic. We really need a community that pulls all those pieces together, and for that we have to think about a little bit of social engineering. There has to be a research phase, where computational scientists are doing data gathering and developing models and have a lot of technological freedom, but where we also partner those scientists with software engineers who think about this as a problem of requirements gathering, of taking those requirements and turning them into a product, and of defining a testing strategy. And after the research phase there needs to be an engineering phase, where the software engineers take the lead: they implement the testing strategy, they extract building blocks out of those algorithms, and they make sure the platform is robust. The computational scientists are still important here; they're still needed to provide feedback and, crucially, to ensure that the tests are in fact valid and that progress is being made scientifically. And both need to combine in a collaboration. The problem, of course, is that these folks are cut from different cloth: they have different priorities, different tools, different methods, focus, and background, and requirements are constantly changing as we learn new things about the biology. So we have to be upfront about the fact that we need to collaborate, and there are practices and software that help us do this. Some of those have already been talked about quite a bit today. I think even mailing lists go largely underrated in terms of how much they can do to bring folks together onto the same page.
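One concrete form that scientist-validated testing can take is a regression test against a known analytic solution: the engineers run it automatically, while the scientists vouch for the tolerance being scientifically meaningful. A minimal, hypothetical example (the solver, test name, and 1% tolerance are all invented for illustration):

```python
import math

def decay_numeric(x0, k, t_end, steps):
    """Forward-Euler integration of dx/dt = -k * x."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x -= dt * k * x
    return x

def test_matches_analytic_solution():
    """Scientist-owned acceptance test: the numerical module must stay
    within 1% of the closed-form solution x0 * exp(-k * t)."""
    got = decay_numeric(x0=1.0, k=0.5, t_end=2.0, steps=10_000)
    want = math.exp(-1.0)
    assert abs(got - want) / want < 0.01
```
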
And of course there's GitHub, which we've talked a lot about as one of the key platforms for doing this going forward. So that's the pragmatic approach: we have to have a research phase, we have to have an engineering phase, and we have to have communication going between them. Finally, to put it all together: I've given you a snapshot of what's there today in multi-scale modeling, and I've told you how we can try to move that forward with software platforms, but we can't do it without building out a community that's able to think about this single picture and put the pieces together. These are the pieces from which we have to build integrated disease models. So thank you very much.