Now we'll invite up Randall Koene. Randall Koene introduced the multidisciplinary field of whole brain emulation and is the lead curator of its scientific roadmap. He is the founder of the foundation carboncopies.org and has directed research in neural engineering. We're very pleased to have him speak next. I hope I understand the clicker here. There's apparently also a laser. Yes, there is. Okay, nice. So first of all, I'd just like to mention that I'm coming to this conference from pretty much an entirely secular background. Neither of my parents considered themselves religious, although my mother had the wisdom, she felt at least it was a wisdom and I think it was too, to say that I should attend Protestant religion classes every week for the last seven years of my schooling. In Europe that's seven years; here that wouldn't all count as high school. So that was my exposure to it at the time. That said, I'm gonna get into this talk, which is partly technical, partly about, I guess, a managerial problem of how you deal with a large project, and partly about insights into purpose, or why you would do this. So let's see if I can get through it quickly enough that we have some time for questions. The title here is Supporting the Complex Requirements of a Long-Term Project for Whole Brain Emulation. I should start with a few basics, even though I've given those basics in so many talks before that the best way to get a really good overview is to just Google my name or go to my website carboncopies.org and you'll find everything there. So I'm gonna go through it relatively quickly. Then after that, I'm gonna talk about this idea of it being, let's see if I get that right here, a complex project with many requirements and a very long-term project, which is something that is sometimes not suited to the kinds of project or program support that exist these days. 
And then also a method proposed to deal with that, some examples from my own experience, and then, as an outcome of that, a piecewise and sustainable path to getting to whole brain emulation. As I said already, I've talked about the basics quite frequently. This website is a good place to go if you want a nice overview, both in terms of a complete transcript of a talk and a talk that gets into all the gory details of the technical problems. But to begin, I think it's very important to make a few terms very clear, because people say a lot of things like uploading, downloading, sideloading, whatever. It's very vague, and I like to be as concrete and clear as possible. So one of the things we started out with when I got into this seriously was to define a few terms that tell us what our objectives are, what methods we're trying to use, et cetera. You can still debate the terms. Nonetheless, this is what we agreed on: the objective is what we call a substrate-independent mind, which has the nice abbreviation SIM. It's just useful to have that abbreviation because we think of simulation or the Sims or something like that. And it means trying to get to something that isn't really independent of anything underlying, but is independent the same way that platform-independent code is independent of a specific computer platform while still needing to run on some kind of platform. So a substrate-independent mind can run on a number of different platforms other than the biological one. Then there are several ways you could think about getting there, but the one that has had the most study, the one that is currently the most feasible and that I can actually talk about in detail, is this thing we call whole brain emulation. 
The idea is that if you can emulate to a sufficient degree what is actually going on in the biological brain, and you can capture those things that you consider important about what's going on in our mind, so that you satisfy your criterion for getting to a substrate-independent mind, then you have accomplished that by doing whole brain emulation. Now, we still use the term mind uploading to allude to that process of transfer: you're not just creating an emulation somewhere that isn't you, but you could talk about transferring yourself into this substrate-independent mind. Now what's the purpose behind this? I have a personal one, of course, which is that I basically started out a long time ago simply occupying myself with everything that I found interesting. I like to write, I like to compose music, I like to explore and travel, all sorts of things. If you like a lot of things, you very quickly notice that you run up against many restrictions, many hurdles, many limitations. Some of them are time: we just don't have a lot of time, and our lives are also finite. Some of them have more to do with capabilities, such as what are we actually able to envisage? What are we able to come up with? What are we able to process? Those sorts of things. And so trying to understand everything, or trying to create anything, or being on a path to those things, that sounds like a very satisfying thing for me to do. That's something I enjoy a lot, and so I was thinking about what you can do to get there. What is the best way to do that? And if you really want to enhance your abilities, your capabilities, as well as your time, if you want to enhance all that and expand it, you need to have access to the processes that are there. The biological brain doesn't really allow for that very easily. It's not built to give you perfect access. 
Now, if you end up creating interfaces to the brain and ultimately emulate the whole thing in a way where you do have access, well, then you've got the flexibility to start making things happen. There's also an argument that isn't so personal but is more an argument for our species, which is that right now we seem to be very well suited for the environment we're in. This is one particular place and one particular time. In this environment, selection kind of pointed out that yes, humans survive, humans know what to do. They can build the things they need to survive. But that doesn't automatically mean that we're equally well suited to challenges that are coming up elsewhere or at other times, and Darwinism is a really brutal thing, because selection, natural selection or otherwise, means lots of losers and just a few winners. It does not mean that you get to evolve into something else; it just means that there's a bunch of things, and some of them get selected, and they win, they continue procreating, and others don't. There are always a lot of losers. So changes are coming, and we may even be causing them. We could think of artificial intelligence, we could think of space travel, we could think of the environment changing, and for that I think we need to make the species more able to thrive, able to excel wherever it goes. You need to be excellent at adaptability; the most adaptable species gets to survive. And I think we have a responsibility to work on that now, because right now we have international science and large infrastructure, as well as still a pretty big economy available to us, so we should be doing it now. Now, what would you actually do? Well, we have evidence that substrate-independent minds may be possible. The best examples that I know of currently are neural prostheses that exist today. Some simple ones are the cochlear implants and retinal implants that are being worked on or that are already out there being used. 
They're easier because they deal directly with signals coming from the outside, so you know what to interpret. The best one to compare with what's going on in substrate-independent minds or whole brain emulation is the hippocampal prosthesis that Ted Berger's group has worked on at USC. It's a replacement for a piece of the hippocampus called CA3, in rats anyway, that deals with learning new episodic memory, being able to remember things as they appear. So right now you're listening to my words. Those are separate things that are coming in; you need to remember them in a certain order, you need to remember this event. If you don't have an active region like that, then you can't do that anymore; you cannot make any more of those memories. Now, Berger has shown with a chip that replaces the function of this area that he can get that to work again in rats, and now also in nonhuman primates, and it's only three years until they get to start doing that in human trials. So this is quite real; it's a good example of a small neural prosthesis. How was it done? It was done by taking a look at what was going into the system and what was coming out of the system, input and output, and then knowing what sort of signals you might be interested in. In this case he was interested in these things called neural spikes, the events whenever a neuron activates, and the timing of those spikes. If you were interested in looking at something else, say an unknown system that is a computer chip, you might be looking at ones and zeros and trying to understand those. It's relevant to know which signals you're looking at. For example, in a chip you're not interested in what cosmic rays are doing to the signal, and you may not be interested, as output, in exactly how the chip is heating up certain areas underneath it. You're more interested in the ones and zeros. 
In the same way, you need to learn what it is that you're actually looking for in the brain, or in a certain part of the brain, to know what to try to identify in this process called system identification. So what Berger's group did is study that, and they were able to come up with this thing called a transfer function that explains how the input gets mapped to the output, and when you put that in a chip, it works. Now, this is not the same as trying to make an abstraction the way you do when you interpret what's going on in a piece of the brain to build an artificial intelligence, because you're not trying to abstract it. In fact, you're trying to cover all of the latent function that's there. You want it to behave the same way as that piece of circuitry did in the biological brain. Now, if you're going to look at the entire brain, it's kind of a big thing. It has about 100 billion neurons, actually fewer than that, but everybody says 100 billion because it's a nice round number, and it has a lot of connections. If you were to try to do what Berger did by treating the entire brain as one black box, you could study those inputs and outputs for a long time and you'd probably miss a lot of the latent function that's in there, because there are just too many variables. You cannot capture it in the same manner. But what you can do is break it down into a lot of smaller systems. So you try to break it down into systems small enough to interpret, at least as small as the one Berger looked at, perhaps smaller, maybe single neurons, and then work out how they are all connected, so you know how all those subsystems communicate with one another. When you have that, then you can start looking at each one of the individual pieces, that's the functional problem here, and try to find out what is going on there, just like Berger did. If you can describe that, you can build the whole thing. 
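To make the system-identification idea concrete, here is a minimal sketch. The real hippocampal prosthesis used multi-input multi-output nonlinear (Volterra-kernel) models; this toy assumes the black box is just a linear transfer kernel applied to an input spike train, and recovers that kernel purely from observed input/output data. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black box: its output is the input spike train convolved with a
# hidden decaying "synaptic" kernel, plus measurement noise. In system
# identification we only get to observe x (input) and y (output).
T, L = 2000, 10                            # timesteps, kernel length
x = (rng.random(T) < 0.1).astype(float)    # input spike train (~10% rate)
true_kernel = np.exp(-np.arange(L) / 3.0)  # the hidden transfer kernel
y = np.convolve(x, true_kernel)[:T] + 0.05 * rng.standard_normal(T)

# Identify the system: regress the output on lagged copies of the input
# to estimate the kernel (a linear stand-in for a transfer function).
X = np.column_stack(
    [np.concatenate([np.zeros(k), x[:T - k]]) for k in range(L)]
)
est_kernel, *_ = np.linalg.lstsq(X, y, rcond=None)

# With enough data, the estimate closely matches the hidden kernel.
print(np.max(np.abs(est_kernel - true_kernel)))
```

Once the fitted kernel reproduces the box's input-output behavior, it can stand in for the box, which is, in highly simplified form, what putting the transfer function "in a chip" amounts to.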
That's why, when we look at this as a big project, the systematic way of approaching whole brain emulation is to say we need research in four main pillar areas. First, in the area of structure, the connectome: finding out what all the connections are. And usually we talk about using electron microscopy or something of that nature, so we can really follow the physical connections, rather than doing the connectomics that came up in a previous talk, where fMRI was used. fMRI is relatively slow and the resolution is not very good; plus, you're only observing over a certain period of time, so you're not going to capture connections that didn't happen to show up in the activity you were looking at. So it's probably not quite sufficient for these purposes. So we look at things like electron microscopy, and there were some really great results coming out of work between 2008 and 2012, five years of development, where we ended up with two papers, one by Briggman et al. and one by Bock et al., that showed that you can actually interpret, by looking at the connections between cells in the visual cortex or cells in the retina, what their function is, what sort of signals they're sensitive to. So you could actually determine something functional, you could determine what's going on in that system, by looking at this connectome. Now, the functional part is the troublesome part at the moment, because our tools aren't good enough to get that data from all of the neurons at the same time. But I'm gonna get into that a bit more in a second; that's a really exciting area of development right now. Then, when you have that, the other pillar is that you need to be able to represent what's coming out of all this data in some way. You need a model that describes what's going on in all those little subsystems, and you can perhaps implement that in some kind of hardware. When you've done that, then you still need to go back and test all of your hypotheses. 
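A toy illustration of the reading-function-from-structure idea: neurons that wire alike can be grouped by comparing their connectivity fingerprints, i.e. rows of the connectome's adjacency matrix. The six-neuron connectome below is invented; the actual studies worked from dense EM reconstructions and related wiring to measured response properties.

```python
import numpy as np

# Invented 6-neuron connectome: neurons 0-2 wire densely among
# themselves, as do neurons 3-5, with no cross-group connections.
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Cosine similarity between every pair of connectivity fingerprints
# (rows); structurally similar neurons are candidates for similar roles.
norms = np.linalg.norm(adj, axis=1)
sim = (adj @ adj.T) / np.outer(norms, norms)

# Within-group fingerprints are more alike than cross-group ones.
print(sim[0, 1] > sim[0, 3])  # prints True
```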
You need to validate this, you need to have applications, and that means this whole process is iterative. You keep going through it until you've met your success criteria, until you have what you're looking for as a whole brain emulation. Okay, those are actually the basics of the whole thing, and you can get way more detail about that elsewhere. Now I wanted to look at the problem of this big mission. It's a long-term project, a very complex one with many requirements. You can take all of those goals that are part of these four pillars and say they have many requirements; we can break that down into problems and say there are many different possible solutions you can pursue. And then you look at what's going on in the world, what the actual activities are, the R&D that's happening elsewhere, and ask, well, how do we connect these two? One thing you notice is that the R&D activity is driven not so much by these long-term requirements that we have, but by short-term interests: things like what somebody can get a grant for, or the interest of a company that's hiring someone to do research in some area, and many other short-term interests. We could describe this as a field of vectors, where every lab is a little square and has a little vector of activity pointing somewhere. If you add them up, you get trends; you can look at trends of what's really going on. But there's really no guarantee that the trends are pointing at our goals. And if the trends are actually going somewhere else, for example if everybody's working on AI, what happens in the future if that stuff becomes really powerful but we don't really improve our species? You can imagine what sort of scenarios might occur. It might be nicer if we evolve along with our machines. So there's a good reason to want to have these vectors pointing in a way where the trends are good for us. 
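The field-of-vectors picture can be made literal with a tiny sketch. Here each lab's activity is an invented direction in a two-dimensional "research space" (one axis for short-term incentives, one for the long-term goal); summing the vectors gives the field's aggregate trend, and the cosine similarity with the goal direction shows how far the trend can drift from the goal even when one lab aims straight at it.

```python
import numpy as np

# Invented lab activity vectors: three labs chase the short-term axis,
# one aims mostly at the long-term goal axis.
labs = np.array([
    [1.0, 0.1],
    [0.9, 0.2],
    [0.8, -0.1],
    [0.2, 1.0],
])
trend = labs.sum(axis=0)           # the field's aggregate direction
goal = np.array([0.0, 1.0])        # the long-term goal direction

# Cosine similarity between the aggregate trend and the goal.
alignment = trend @ goal / (np.linalg.norm(trend) * np.linalg.norm(goal))
print(round(float(alignment), 2))  # well below 1.0: the trend misses the goal
```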
So what can you do about this? An approach that I've been using, and this is from my personal experience, and I created an organization around it called carboncopies.org, is to maintain, as I have been doing for several years, a roadmap of all of these problems and possible solutions and what's going on in various labs. And from that, I've tried to look at what the trends are, et cetera, and looked at both the top-down needs, in terms of how we break the problem down into small problems, and the bottom-up feasibility. And I've tried to encourage meeting requirements and filling gaps by emphasizing certain activities that should be done. It's a big-picture vigilance that I apply. And this has led, I'm just quickly giving you this as an overview, to the point where, in these four pillars that I showed before, structure, function, emulation, application, you've got a bunch of people active, different labs, using different kinds of techniques, like using EM to do connectomics or working on electrodes and things like that. And you can find how they can best work together, and actually talk to them all and get them talking, and see that there are some results from that. So for example, what happened is that there was a need, as I saw in the roadmap, for more work in that area of getting high-resolution functional activity recording going. A few years ago, I invited this guy here, Ed Boyden, to come around and give a talk at the place where I was working. And we started talking about this thing called a molecular ticker tape, basically a biological way to try to record activity in a cell, which I'm not gonna get into in detail now because I've done that elsewhere. And he's known for doing a lot of recording work. He co-developed optogenetics, a way to stimulate cells to either be inhibited or activated using light. And we started talking about this stuff, the molecular ticker tape. 
And he took that back, and they started a group around this brain activity mapping, and something grew out of that. Then a little later on, this guy here, Jose Carmena, came along and said, I'm interested in whole brain emulation all of a sudden. I've known you for a couple of years; I didn't used to think it was great, now I do. And then I started talking to him about it. He had known Ed Boyden, but he didn't know that Ed was also interested in whole brain emulation. They got together, and it turns out Jose and his colleagues have some work where they're doing tiny wireless neural interfaces using ultrasound, which doesn't heat up the brain as much when you send signals that way. And together with a few other people they formed this thing called POBAM, the Physics of Brain Activity Mapping. That group looks specifically at how you get data from every single neuron at one-millisecond temporal resolution. And eventually they made a proposal that they sent off to the White House, which became the BRAIN Initiative that the Obama administration took up. Now, I can't possibly take full credit for that, but I was involved in the process, and I think it helps to give these little nudges. Now, if you look at this stuff that came out of the Carmena lab at Berkeley, they made something called neural dust. This is already pretty cool. It's a tiny thing, 126 micrometers in size, and they're gonna shrink it down to about 20. They put a little tail on it that's only five microns in size, so you can get to even smaller scales when detecting things. And they're gonna do animal trials really soon. You see similar prototypes now coming out of Harvard and MIT as well, where they happen to use infrared instead of ultrasound. 
There's another new idea coming along, using bio-vectors to pull nanowires close to the systems you want to reach, and cross-disciplinary work going on in that group, coming up with hybrid solutions for how to get huge numbers of these things into the system. So this is progressing quite quickly, and that's just beautiful to see. Now, over the longer term, I can see this as a strategy towards whole brain emulation by thinking of it as first developing interfaces that are going to help us get better interfaces to the brain, so brain-machine interfaces. They are going to create systems where you have long-term operational interfaces that can reach very many neurons, maybe every single neuron, and not cause harm to the brain in the process. So you can use this in vivo, eventually not just in patients and in animals, but in healthy people. I mean, once we don't have to fear these devices, that becomes really interesting. You can put them inside the loop with the brain, which is great, better than these simple brain-machine interfaces with EEG that basically can't do much better than what you could pick up by checking someone's eye blink or something like that. These have really wide channels of data, so you can get to data that is otherwise inaccessible. Say you recorded in the hippocampus from all those neurons and you stored away which neurons were firing at a certain time. You could store, okay, this pattern fired when I was here today; if I want to re-experience what happened today, stimulate those cells. You could start sorting your memories by date, things like that. You could decide you want to stimulate memories you find important, because you remember when they were and what they were. That's stuff we weren't able to do before, and it's possible with a BMI like that. Or you could deliver data into the system that you couldn't normally get, things that computers can do really well, like parameter fitting, or recognizing things much faster and better than we can. 
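The sort-your-memories-by-date idea amounts to a simple data structure plus a stimulation step. Here's a hedged sketch: a hypothetical BMI logs which neuron IDs fired during an event, indexed by day, and recalling a memory means looking up that pattern so the same cells can be re-stimulated. The `MemoryIndex` class and `stimulate()` stub are inventions for illustration, not any real device interface.

```python
from collections import defaultdict
from datetime import date

class MemoryIndex:
    """Hypothetical store of firing patterns logged by a BMI, keyed by day."""

    def __init__(self):
        self._by_day = defaultdict(list)

    def record(self, day, firing_pattern):
        # Log the set of neuron IDs that were active during an event.
        self._by_day[day].append(frozenset(firing_pattern))

    def recall(self, day):
        # Return every stored firing pattern for that day, ready to replay.
        return list(self._by_day[day])

def stimulate(pattern):
    # Placeholder: a real system would drive these neurons via implants
    # (optically, or with ultrasound-powered devices); here we just
    # report what would be replayed.
    return f"replaying a pattern of {len(pattern)} neurons"

index = MemoryIndex()
index.record(date(2013, 6, 1), {101, 204, 377})   # "this pattern fired today"
for pattern in index.recall(date(2013, 6, 1)):    # later: re-experience today
    print(stimulate(pattern))                     # prints "replaying a pattern of 3 neurons"
```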
So you can deliver information you can then use as part of yourself in a very natural way. But of course, you also need to understand better how the brain talks to itself to do this. You need to use an approach like Berger did for the hippocampus, not the way brain-machine interfaces are mostly done today, which is that you stick in a wire and you let the brain figure it out, by training it to use that wire to control something. This is necessary because once you have so many little pinpricks going into the brain, and this is, by the way, not the way you'd want to do it, it's the example of what you're trying to get away from by building these new interfaces, then you can't do the training anymore, because it's hard for the brain to concentrate on just one of those outputs at a time, and it gets very confusing. Actually, I'm quite confused as well, about the time. Am I running out or have I run out? I've got three more minutes, okay. Okay, so this brings us to what I would consider a more sustainable, piecewise approach to getting to whole brain emulation. You develop interfaces, prosthetics, and get there eventually, rather than through the clean-room approach where we're just trying to figure out exactly how to scan a whole brain and make a whole model out of it. That's still possible; it's just that you would start doing that with smaller things, like pieces of the brain, the retina, maybe the hippocampus, or with smaller brains, like the brain of the fruit fly, Drosophila. In fact, there is a lot of interest in that fly, because it happens to share a lot of things with us: it's got motor activity, it recognizes things, a lot of the things we do. And you can look at what's currently possible in connectomics and what hopefully will be possible in an equal amount of time, say five years of development, for these functional data acquisition methods. 
You could imagine that maybe in 2018 we could put those things together and start a project to emulate the brain of Drosophila. Now, that would be pretty fantastic. So this seems sustainable, if you have a body that attends to the big picture and iteratively makes these little nudges that tell people: this is interesting, and that person is interested in it too, and here you can get some funding for it, et cetera. But of course, unfortunately, that sort of work isn't very well supported so far. There aren't a lot of incentives pushing what we're calling architecting here. That term is actually one that Boyden came up with. Right now, that POBAM group is doing it, and Carboncopies has been trying to do it, but I've personally experienced that it's very difficult at the moment if you're really trying to concentrate 100% of your time on it, because the funding comes and the funding goes. It's very fragile. Funding sources right now mostly concentrate either, in academia, on very extreme specialization in silos, where there is not much crosstalk in a cross-disciplinary manner, or, in the outside world, let's say investment in entrepreneurial ventures, on short-term returns. It doesn't really look at long-term projects and their needs, so it's kind of hard to do in a sustained fashion at the moment. But I think that being able to do so is relevant to the long-term success of a complicated project like this. And there are a few other things that I won't get into right now, like, for example, being able to look at multiple solutions at the same time, and having people share information rather than keep it to themselves. We have some ideas about how to deal with that as well, but that's still in the works this year. And so, looking at robust mission design, it's very important not to have a single point of failure if you're gonna try to make something as complicated as this. 
And that applies to the technology, the labs, and the people involved. You don't want to depend on whether one person is gonna continue doing their research, or happens to have a stroke and disappears, a horrible thing, but it does happen. And you need the resources for the work to also not depend on a single point of failure; a single source of funding, or a single source for the instruments that you need, is very bad. Now, right now, it seems to work fairly well for the technology, labs, and people. The structure is kind of in place, so that there isn't just a single option, a single solution. But there's still a lot of frailty on the resource side, and that still hampers progress, as in the example I gave from my own personal experience at carboncopies.org. So this robustness is really a goal for this year, to get that done. Just summarizing now at the end: substrate-independent minds are a way to improve ourselves while we improve our machines. It's a way to keep up, and not just basically have to sit by while machines have all the new fun of being able to experience things that we can't experience. There is also the matter that trends and exponential curves, while they're really cool, don't automatically lead to goals. So even if Kurzweil's right, it doesn't necessarily mean that the things we want to see happen in the future are exactly what happens. To get something big like this to happen, it needs to be monitored and nurtured. Right now, after several years of sustained effort, there are actually labs that say whole brain emulation is a good thing to do in our lab, because we can test all of our neuroscience hypotheses if we can build the thing. So they're starting to accept it. A few years ago, you'd go into those labs and they'd say, I'm sorry, I mean, I'm interested in neural architecture, but that is science fiction; we can't talk about that yet. Now it's okay. 
We've got this proof of concept for the connectome side, and we've got work going on for the functional side. So, five years to something really interesting, maybe. Robustness still needs some improvement. And there's one more thing that's on the table, which is that I've been writing a whole brain emulation coffee table book for a while, and I really want to get that out this year, or as soon as possible. I think it should help people digest the frequently asked questions that come up over and over again, and to just get down to the nitty-gritty and move forward, so we can have concrete discussions about this. Thank you.