How many of you raise your hands if you have one of these? Keep your hands up if you're quite attached to it. Good. I'm attached to mine as well. So usually when we think about brains, we think about the naturally occurring kind. But what we're going to talk about today is the prospect of building artificial brains. And the only reason we can really have this conversation today is that there are two fields that are exploding right now and are on a collision course with one another. On one hand we have neuroscience, the study of the brain, and on the other we have technology, and in particular computer science. Now it might seem weird to connect technology and neuroscience, but it's actually something we've been doing for a very long time. If we look through history, we've always looked at the mind and the brain through the lens of the technology of the day. In the era of Descartes, hydraulics was high technology, and Descartes imagined that an animating fluid flowed through the body to move it. So we were using the technology of the day as a metaphor for the mind. Move to the era of Freud: steam is the technology of the day, and we start thinking about the mind building up pressure and letting off pressure, about stoking the engines of cognition. Fast forward to the era of electronics and radio, and we have crossed wires and being on the same wavelength. Our language follows our technology. And today, hands down, the technology of the day is the computer. From its humble beginnings in the middle of the last century to today, where we all carry computers around in our pockets and slavishly look at them all the time, computers are now the lens through which we see our brains. And these metaphors can lead us astray and lead us to think the wrong things about the brain. If you think about a central processing unit being separate from memory, that's not actually how our brain works.
And as appealing as it might be to get a memory upgrade for our brains, that's just not the way it's going to happen. But I would argue that this metaphor is different from the ones that came before. The reason is that computer science is really a field of applied mathematics that lets us reason about the algorithm, which is what we're computing, separately from the implementation, which is the hardware we use to do that computation. What computer science gives us is an equivalence: we can think about an algorithm running on hardware other than the hardware it originally ran on. So if we understand the algorithms of the brain, can we think about running them on other hardware that we have, silicon hardware? And this is a quiet revolution that's happening right now. There's something called deep learning, or neural networks. It's actually quite an old technology, but in the last five years it's been making incredible strides, driven by the availability of incredibly powerful compute and tons of data. Just five years ago it would have been unthinkable that a computer vision system would be able to recognize an object except on a more or less blank background. But nowadays, and this is a system from Andrej Karpathy and Fei-Fei Li's lab, we can have computers that look at a picture and actually generate a caption. So here the computer just looked at the picture and generated the caption: a man in a black shirt is playing a guitar, or a construction worker in an orange safety vest is working on the road. It's amazing what's happened in just the last five years. We also have very high-profile things like Google DeepMind's AlphaGo, which beat the world champion at Go, basically one of the last games where humans were still any good relative to computers. It's very exciting. And then we even have computers starting to make art.
It's not all good art, but we can start to see that all these domains we thought of as solely human are now increasingly being encroached upon by machine learning systems. And of course this has caused a tectonic shift in the field. There's been massive investment by industry: billions of dollars from the likes of Google and Apple and Baidu. Basically an entire academic field has been privatized and brought in-house, and sometimes I feel like I'm going to be the only person left studying this. If you're a representative of one of these companies and you'd like to buy out my lab, we can talk later. Of course, not everyone thinks this is a good thing. Elon Musk is famous for saying that building artificial intelligence is equivalent to summoning the demon, which is awesome when your life's work is being compared to summoning the demon. But thankfully there are cooler heads who have commented on this, like Stephen Hawking, who said that artificial intelligence could end mankind. I'm here to tell you that we're not quite there yet. This is an image from the 2015 DARPA Robotics Challenge, in which robots have to operate on uneven terrain. And this is hard, and I'm not knocking any of these robots; these are amazing robots. But a lot of the things we take for granted and think are simple are only simple because we already have the solution to the problem in our heads; evolution gave us our brains, and that's what we have. There are other examples too. Before, I showed you those wonderful captions that seemed miraculous for a computer to generate. But if you dig a little deeper and choose your images carefully, sometimes you find funny things. So this image is captioned a man riding a motorcycle on a beach, which sounds like fun. This one is an airplane parked on the tarmac at an airport. I would say that pilot needs to be fired.
And this one is a group of people standing on top of the beach, which sounds like a fun day out on the weekend. So there's a sense in which these systems are truly amazing, and I don't mean to belittle them, but there's also a sense in which we haven't gotten the whole story yet and something is still missing. These systems aren't really understanding in the way we conventionally think about understanding. So what my lab does, and what I'm interested in doing, is going back to the brain to squeeze out some more inspiration: what are we missing from the brain that we could build into our artificial systems? Luckily for me, around 2014 a big fish got interested in the same problem. IARPA, the Intelligence Advanced Research Projects Activity, is the high-risk, high-reward arm of the intelligence community of the United States. It's analogous to DARPA, the defense version, which is famous for funding the creation of the internet. IARPA started a program that was basically right up my alley; they essentially proposed my research program, in a program called Machine Intelligence from Cortical Networks, or MICrONS. And the goal of MICrONS is threefold. One, they asked us to measure the activity in a living brain while an animal actually learns to do something, and watch how that activity changes. Two, to take that brain out and exhaustively map the wiring diagram of every neuron connecting to every other neuron in a particular region of that animal's brain. And three, to use those two experimental data sets to build better machine learning: to find what deep learning and neural networks are missing today, so that we can close that gap. So let it never be said that IARPA is unambitious. This is an incredibly difficult thing they've asked us to do, but fortunately I was able to put together a dream team to work on it. This is an enormous undertaking.
We are crossing 12 labs in six institutions, with a heavy concentration of work at Harvard and MIT. We're going to work on this for five years, for $28.7 million, and by the time we're done we'll have collected two petabytes of data, one of the largest neuroscience data sets ever collected. Across this team we have expertise in neuroscience, in physics, in machine learning, and in high-performance computing. So this is really a moonshot effort, on the sort of ambition and scale of the Human Genome Project, to take a real crack at reverse engineering the brain. I'm going to walk you through a little bit of how this goes. The experiment starts on the second floor of the Northwest Labs, where my lab is, and the brain is going to take a rather unusual, epic journey. We start not with humans but with rats; this one's slightly larger than life size on the screen. The reason we're looking at rats is that we need to walk before we run: we're not ready to do this experiment with humans yet. Finding human volunteers is also somewhat challenging, because we take the brain out as part of this. So we start with a rat, in many cases a rat born in my laboratory for this purpose. And if you think that rats are dumb, I just want to share an anecdote. A few years ago, a group studying invasive species released a rat onto a deserted island with various pest control measures planted on it, and they put a radio collar on the rat to test how easy it would be to eradicate an invasive rat infestation. They were interested in the ecology of the situation. They tracked the rat for a while on this island here. After a week, the radio collar signal disappeared. They scoured the island, but they couldn't find the rat anywhere, even though the island was covered in traps.
It turns out the rat had decided to swim out into the open ocean to an adjacent island, and was later found having swum several miles through open water. So these are scrappy creatures; these are not dumb animals. And what we want to do is take that scrappiness and that intelligence and use it to help understand how that learning happens, how that scrappy brain works. We do that in a controlled setting in my laboratory, where we train rats to do tasks. This is basically a video arcade for rats: each one of those boxes is a computer-controlled training rig. We put the rat in, and then a computer takes over and trains the rat to do pretty much anything we want it to do. This is what it looks like inside. You can see there's a little lick tube here that the animal can lick to give us responses, we have some sensors, and then there's a monitor. What we can do is show the animal different stimuli or different objects on the screen, and then train them to do different things. Then we can ask: how does that brain look before the animal learns the task, versus after? Here's a little video of a rat doing a task. Just to orient you, here's the animal's nose, and you can see the rat's happily looking here. Then objects appear on the screen, the animal licks the tube, and when he makes the right response he gets a reward of liquid, juice that he likes, and when he gets it wrong he gets a short time out. So we can basically train the animal to play these video games, and we can ask what changes in the brain between when the animal doesn't know how to do the task and when it does. But we're not just interested in training rats, as fun as that is. What we really want to do is look at the brain as it changes.
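The trial structure of those computer-controlled rigs can be sketched in a few lines of code. This is only an illustration of the logic described above, not the actual rig software; the stimulus names, reward volume, and timeout duration are all assumptions made up for the sketch:

```python
import random

# Toy sketch of one trial in a two-choice visual task: show a stimulus,
# read the animal's lick response, then deliver a liquid reward for a
# correct response or impose a short timeout for an incorrect one.

REWARD_ML = 0.02   # assumed reward volume per correct trial (hypothetical)
TIMEOUT_S = 3.0    # assumed timeout after an incorrect response (hypothetical)

def run_trial(stimulus, respond):
    """Run one trial; `respond` maps a stimulus to 'left' or 'right'."""
    correct_side = 'left' if stimulus == 'object_A' else 'right'
    response = respond(stimulus)
    if response == correct_side:
        return {'correct': True, 'reward_ml': REWARD_ML, 'timeout_s': 0.0}
    return {'correct': False, 'reward_ml': 0.0, 'timeout_s': TIMEOUT_S}

def run_session(n_trials, respond, seed=0):
    """Run a block of randomized trials and report the fraction correct."""
    rng = random.Random(seed)
    results = [run_trial(rng.choice(['object_A', 'object_B']), respond)
               for _ in range(n_trials)]
    return sum(r['correct'] for r in results) / n_trials

# A 'trained' policy that has learned the stimulus-response mapping:
trained = lambda s: 'left' if s == 'object_A' else 'right'
# An 'untrained' policy that licks the same side regardless of stimulus:
untrained = lambda s: 'left'
```

Comparing the session scores of the two policies gives exactly the kind of before-learning versus after-learning measurement the experiment is built around.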
So this is a rat's brain, and we want to look at it while it's still in the animal's head, while the animal is actually doing something. We need a technology that lets us peer inside the brain, and that's what this is: a two-photon excitation microscope. This is a microscope powered by a very powerful invisible laser that we shine into the brain. We actually have the world's fastest two-photon microscope now, from our collaborator Alipasha Vaziri, which can record movies of the activity of large numbers of cells at single-cell resolution, so we can see the patterns of activity. This is what it looks like. You can see these flashing green dots; every one of those flashes is a neuron firing in response to something in the environment. So you're watching a rat, or in this case actually a mouse, having a thought. We can actually see thought, look at the patterns of activity, and see how those patterns change as we go from an animal that doesn't know how to do something to an animal that does. Now we're going to go one step further: we're going to take that brain out and reconstruct all of the wiring between all of the neurons. We take the brain out, we soak it in heavy metals, in this case osmium, and then we put it in a FedEx box and ship it to Argonne National Laboratory, and in particular to the Advanced Photon Source. This is an accelerator ring that whips electrons around at nearly the speed of light and produces incredibly bright, brief pulses of X-ray radiation. It's basically the world's most advanced CT machine. If you've gone to a hospital and had a CT of your head, perhaps after an injury, this is basically the same thing, but at an incredibly small, microscopic scale. What this lets us do is see inside a piece of the brain.
So if we have a cylindrical core of the brain, we can look inside it without cutting it, and we can see every single cell, some of the vasculature, the blood vessels that serve it, and also some of the wiring. This gives us a high-resolution picture of the brain with which we can orient ourselves. But that's not enough, because IARPA asked us to figure out every single connection, every single wire between every neuron in the brain. So we need to put it back in a FedEx envelope and send it back to Cambridge for something called serial section electron microscopy. Here we want to see individual connections, and these are incredibly small. They're so small, in fact, that you literally can't see them with light: the wavelength of light is too big to interact with things this small. So we need to cut the tissue up. Imagine a big bowl of spaghetti, but on a nano scale; we need to slice it into tiny slices and then image it, and in this case we use electrons to image it. This is the world's most sophisticated deli slicer. This block here is a piece of a brain that's been embedded in plastic, and the machine is slowly carving off slices of the brain, which are then collected onto this tape. To give you a sense of how thin these slices are: if we blew up a human hair, just a hair out of your head, it's about 20 to 30 microns wide. So this is a very, very zoomed-in picture of the shaft of a hair; this white bar is about 10 microns, about a hundredth of a millimeter. If you wanted to see how big blood cells are, that's about how big blood cells are. And if we zoom in even further, this line is how thin the slices we're cutting with that deli slicer are. We're cutting 30-nanometer slices, 30 billionths of a meter thin. And then we put them on tape.
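These scales are easy to lose track of, so here is a quick back-of-envelope check. It uses only the figures from the talk: 30-nanometer sections, a roughly 20-micron hair, and the 4-nanometer pixel size of the electron-microscope imaging; the one-byte-per-pixel assumption is mine, added purely for illustration:

```python
# Back-of-envelope arithmetic on the sectioning and imaging numbers.
SLICE_M = 30e-9   # 30-nanometer section thickness
HAIR_M = 20e-6    # ~20-micron human hair shaft (low end of 20-30 microns)
MM_M = 1e-3       # one millimeter

# How many sections fit across the width of a single hair?
slices_per_hair = HAIR_M / SLICE_M        # ~667 sections per hair width

# How many sections does one millimeter of tissue become?
slices_per_mm = MM_M / SLICE_M            # ~33,333 sections

# Rough data volume for a cubic millimeter imaged at 4-nanometer pixels,
# assuming one byte per pixel (an assumption for illustration only):
PIXEL_M = 4e-9
pixels_per_section = (MM_M / PIXEL_M) ** 2    # 250,000 x 250,000 pixels
total_bytes = pixels_per_section * slices_per_mm
petabytes = total_bytes / 1e15                # ~2.1 petabytes
```

Under that crude assumption, the estimate lands right around the two petabytes quoted for the full data set, which is a reassuring consistency check on the numbers.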
So basically we have miles of tape collecting these sections of this brain. We spool them up, Jeff Lichtman's lab cuts them up and puts them on silicon wafers, and then we have a catalog of this animal's brain: every piece of it, cut into 30-nanometer slices and laid onto these wafers. Then we image them in this, which at the time it was built was the world's fastest electron microscope, and which images a cubic millimeter of brain at 4-nanometer resolution. This is what the images look like, and then they can be reconstructed. We take all of these images, and you can see we can identify each little piece here; what you're seeing in these cross sections are individual wires going from one nerve cell to another in the brain, and using computer vision techniques we can reconstruct all of these pieces. Then we take all that data, almost two petabytes of it, and it goes to One Summer Street, above the Macy's in Downtown Crossing in Boston. It turns out Harvard rents data center space there; I actually went to visit recently. The final resting place of this animal's brain at Harvard is this: a two-petabyte storage array, a bunch of hard drives where we're storing what remains of this animal's brain in digital form. And from there, IARPA wants us to deliver the brain up to the cloud, so we take a dedicated high-speed connection to internet infrastructure, the fast, fast internet, and we upload that animal's brain to the cloud. This idea of brain uploading has captured a bit of the popular imagination, and magazines like Time and Focus have started to latch onto the question: what if we could upload our brains? Maybe that's a path to immortality.
Back in the 80s, William Gibson wrote a book called Neuromancer that explored these themes of brain uploading, and there have been art films exploring the idea as well. What I can tell you is that way before humans upload their brains, it's going to be rats that get their brains into the cloud first. And people are taking this idea seriously: here's an example of a woman who was dying of a terminal disease, and she decided that what she wanted to do was preserve her brain, in the hope that people like us would one day figure out how to bring it back; the techniques used to preserve her brain are similar to the ones we use to preserve our rats' brains. Now, if you're excited about the idea of uploading your brain, I have good news, I have bad news, and I have neutral news. The good news is that there's nothing in principle that stops us from doing this. There are scientists who, if you ask them, will say that's crazy, we can't possibly upload brains. I won't tell you when it's going to happen, but it could happen; I'm just going to put that out there. There's nothing in principle that stops us from digitizing a brain. The bad news is that we have no idea how to do it yet, and it's going to be a long time before we figure it out; it's not even clear that we're collecting all of the data we'd need from the brain. But these are the first steps. This is what the first steps look like toward understanding enough to be able to take a brain and put it into digital form.
Now, I also promised you neutral news, and the neutral news is that well before we get anywhere close to uploading a brain, many other things are going to happen first, and they're going to have huge impacts on our world. Take the notion of the fourth industrial revolution that's been so prominent at this meeting. If we can capture more of what makes brains smart and adaptable and able to learn, there's a huge fraction of employment that's just not going to stick around. Look at jobs like cleaning, factory inspection, lots of different kinds of factory automation jobs: the ability to see the world and interpret it correctly, and the ability to use your hands to enact something in the world. Those jobs are gradually going to erode and go away as we build better robots, and we're already seeing this with things like the Roomba for cleaning and with industrial robots. You might think of these as the insect brains of automation; there's not a lot of smarts here, but there doesn't need to be. Already we're starting to see much more sophisticated robots. Even since that 2015 video where robots were falling over, the robots have gotten a lot better, and we're starting to get more flexible robots made to work with humans. So as we learn what's missing from our machine learning technologies, we're going to see a big shift in how employment works. One of the areas that's super hot right now is self-driving cars. I would submit that the brain power of a rat, properly applied, is sufficient to drive a car. I'm not saying that people who drive cars are rats, please, but a rat has a lot going on in its brain, and there's no reason the car needs to chase cheese. If we understand how this works, we can start to tackle these problems. The urban driving problem is quite difficult, and I think it's going to be a long time before we solve it, but highway driving is perhaps closer and within
reach, and people are already starting to look at trucks, at having self-driving trucks deliver goods in a more efficient way. Unfortunately, if you look at a map, and sorry, this is a very US-centric view because I'm from the US, if you look at the most common occupation by state in the United States, quite a few states have truck driver as the most common occupation. So as we start building systems that can replicate what our brains can do, we're going to have to find something else for those brains to do. The good news is that we've done this before. If we look at a plot of the percentage of the American workforce engaged in agriculture, back in the 1840s it was about 70%, and we've basically taken that down to almost zero. Maybe this time will be different, but we have to think about how these technologies, as they advance, are going to affect things. And I think one of the reasons I'm excited to be here at the World Economic Forum is that we need to start dialogues with many different kinds of stakeholders, with people with many different kinds of expertise. Already in this project we're engaging neuroscience, physics, and computer science, but we also need to start engaging law, business leaders, policy, and ethics. The challenges that lie ahead, and the opportunities this technology enables, are enormous, but we also have to think very seriously about the consequences. So thank you for your time.