All right, I'm going to talk about Brain 2.0. I am a futurist. I'm a techie. Whatever. What does futurist mean? I write sci-fi. I make up lies for money, and people pay me. So I think that's kind of a fun job. I also teach at this place, Singularity University, where we talk about accelerating tech, exponential tech, which is basically what you guys do, and how it gets applied to things. So you saw a picture of my books. I write books. In these books, in the sci-fi, the premise is that you can swallow a vial of silvery stuff, and it's nanobots that get into your brain, attach to your neurons, and broadcast what your neurons are doing, basically over Wi-Fi. So that gives you an API to your brain, and people can write apps. People can hack it, which is very bad, et cetera, et cetera. But it also gets used as a communication device. People can transmit from person to person what they're thinking, what they're seeing, what they're hearing, and form a sort of weak telepathy, as Cory Doctorow called it. But you guys are probably already very familiar with this idea, the idea that you could download video and audio and sensation into a human brain, especially if you're up to date on the latest and greatest philosophers, like the work of the great American philosopher Keanu Reeves, who famously, when the computer downloaded info into his brain, said, "I know kung fu," right? But I want to tell you a little bit about what inspired me to write this sci-fi, because there's real science going on. This is a device called a cochlear implant. Anybody know someone with a cochlear implant? Right. So a cochlear implant looks like a hearing aid, but it's not one. A normal hearing aid cleans up sound and then projects more sound. Sound is vibrations in the air; those vibrations move hair cells in the inner ear, and that creates a sensation. But if you have zero hair cells in the inner ear, no amount of sound, however cleaned up or amplified, can help you.
So the cochlear implant picks up sound and then transmits it digitally into the nervous system as a set of coded electrical pulses, straight into the auditory nerve. It's really a cognitive prosthesis, and it means that about 200,000 people around the world right now who had no other way to hear can actually hear sounds, like this little girl, or like the six-month-old boy you're about to see hearing the first sounds ever in his life. Here we go. It's coming back on. And he's back on again. See how he turned? Hi, Jonathan. Stop the sucking. Hi. Good. Could you hear that? Hi, sweetie. Could you hear that? So Jonathan there is a cyborg, but he's the cutest darn cyborg you've ever seen, right? This is not Arnold Schwarzenegger. And so this tech is really being developed to help people who have some sort of handicap or some sort of injury and so on. So that's proof of concept of data into the brain: sensory data, sound, transmitted basically into the brain. But there are other senses we care about, like sight. This is a guy named Jens Naumann. Jens is 39 in this picture. At age 18, he worked on a railroad, and one day his pickaxe hit metal instead of rock, and a sliver of metal came up and destroyed one eye. At age 19, he's living in Canada. He's an outdoorsy guy. He's out on his snowmobile, refusing to compromise his lifestyle, and he has an accident, and a piece of the clutch flies up and destroys his other eye. So he's 19 years old, blind in both eyes. Until 20 years later, this guy, William Dobelle, who violated every FDA rule, did the surgery in Portugal and fixed him with this system. This is a CCD digital camera on his eyeglasses here. It's like the one on your phone, only much worse, because it was 15 years ago. It takes the data in, transmits it on that wire to a small computer that basically translates the format, and then it is sent up into this jack in the back of his skull, and into V1, primary visual cortex at the back of the brain. Now, let's be clear.
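To make the "coded electrical pulses" idea concrete, here is a toy sketch of a cochlear-implant-style encoder: split incoming sound into frequency bands, one per electrode, and map each band's energy to a stimulation rate. This is purely illustrative; real processors use clinical strategies such as continuous interleaved sampling, and every number here (channel count, band edges, pulse rates) is made up.

```python
import numpy as np

def encode_to_pulses(signal, sample_rate=16000, n_channels=8):
    """Toy cochlear-implant-style encoder: split sound into frequency
    bands and turn each band's energy into a per-electrode pulse rate.
    (Illustrative only; not how a clinical processor works.)"""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Divide the speech-relevant range into n_channels log-spaced bands.
    edges = np.logspace(np.log10(200), np.log10(8000), n_channels + 1)
    band_energy = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    # Louder bands drive faster stimulation on their electrode.
    max_rate = 1000  # pulses per second, per electrode (made up)
    return max_rate * band_energy / (band_energy.max() + 1e-12)

# A pure 1 kHz tone should drive mostly one electrode.
t = np.linspace(0, 0.1, 1600, endpoint=False)
rates = encode_to_pulses(np.sin(2 * np.pi * 1000 * t))
print(np.argmax(rates))  # index of the electrode tuned near 1 kHz
```

The point of the sketch is the shape of the transformation: continuous air vibration in, a small vector of discrete stimulation rates out.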
This thing has 256 wires. It has 256 data pipelines, basically, all of them very slow. The brain has billions of neurons just in primary visual cortex. So the bandwidth here is minimal, and you would think it wouldn't work. But it gives him what they called limited mobility vision, which looks like this. "I was able to very carefully drive and look from my left side to my right side, making sure I was between the row of trees on the right and the building on the left. When I got near any obstruction in the front, I would see that there was an obstruction. I would also see the lack of obstructions. And then when I backed up, I would be able to inspect for obstructions there. It was really a nice feeling." Did you catch that last line? It was really a nice feeling. Like, I can see. OK. But you'll notice that there's nobody in the car with him. There's nobody in the parking lot. There are no cars parked there. Jens' vision, while he had the system, was terrible. It was 16 by 16 pixel grayscale. You would never trade your normal vision for it. But it was a proof of concept that we can send video directly, digitally, into the brain. We can crack that code. And it was a quantum level up for him from zero. Now he could see something at all. So those are both data into the brain: audio, video. What about data out of the brain? Well, here we have a woman with ALS. She's paralyzed from the neck down. But by means of that pedestal on her skull, about 40 electrodes, she's controlling this multi-axis robot arm. And it turns out motivation is super important in learning new skills like this. This is not a natural thing. So you see this grad student is cruelly taunting her with a piece of chocolate. But he's not that cruel. She gets the chocolate. She gets to feed herself. So that's data out, where she can control this fairly complex, multidimensional movement.
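To get a feel for how little information 256 slow channels carry, here's a minimal sketch that reduces an image to a 16-by-16 grid with a few brightness levels. It's a rough, hypothetical approximation of what "limited mobility vision" conveyed; the real system produced phosphenes, not pixels, and the function name and level count are invented.

```python
import numpy as np

def to_limited_mobility_view(image, grid=16, levels=4):
    """Reduce a grayscale image to a coarse grid-of-brightness view,
    loosely approximating a 16x16 phosphene display (hypothetical;
    the real Dobelle system did not map pixels one-to-one)."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    # Average each block of the image down to one coarse cell.
    coarse = image[:bh * grid, :bw * grid] \
        .reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    # Quantize to a handful of brightness levels.
    return np.floor(coarse / 256 * levels).clip(0, levels - 1)

img = np.random.randint(0, 256, (128, 128))  # stand-in camera frame
view = to_limited_mobility_view(img)
print(view.shape)  # (16, 16)
```

256 cells at a few brightness levels each: enough to tell a building from a gap between trees, nowhere near enough for a face.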
But DARPA, who funds a lot of this, really for rehabilitating injured soldiers, not so much for super soldiers, has taken it to the next level. The newest version they funded is bi-directional. So this is a robot arm, funded by DARPA, that also sends back touch data, certainly pressure data. So now the person controlling it with their mind can also control how hard they're pushing and feel what they have in their hand. So that's data out of the brain and data in. Well, here's more data out. When you look at a picture, the neurons in your visual cortex light up in a certain pattern. That's why the thing they did for Jens works. But if we can read that pattern, we can tell what you're looking at, or what you're imagining, or what you're dreaming. So this is a study where people were put in an fMRI machine, a totally non-invasive scanner. On the left, what you have is what they're being shown. On the right, what you have is the output of software, machine-learned software, that is using the data from the brain scanner and nothing else to try to match up what they're seeing with a library of fuzzed-out videos that it has. And you can tell it's very, very far from perfect. But even with no wires in the skull whatsoever, the software can roughly tell the sort of thing they're looking at. And we all know algorithms are getting better, right? Deep learning has improved a lot of machine learning. And the newest iterations of this, just with algorithmic improvements, are now so good that if you stare long enough, the software can tell what letter of the alphabet you're staring at. So it's data in, data out: audio, video, touch, et cetera. But of course, we're more than just sensory machines. We also process data. We have these things that we think of as our higher functions, right? Like memory. What movie is this from? Memento, very good. What's the name of the actor? Guy Pearce, you guys are good. What's the name of the character?
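The library-matching idea behind that fMRI study can be sketched very simply: pick the library entry whose known brain-response signature is most similar to the observed pattern. This toy version uses cosine similarity over made-up vectors; the actual study used far richer encoding models, and every name and number here is invented.

```python
import numpy as np

def decode_by_library_match(brain_pattern, library):
    """Return the name of the library clip whose (pre-learned) brain
    signature best matches the observed pattern, by cosine similarity.
    A cartoon of the match-against-a-library idea in the fMRI study."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    scores = {name: cosine(brain_pattern, sig)
              for name, sig in library.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Pretend we've already learned one brain signature per clip.
library = {name: rng.normal(size=50) for name in ["face", "car", "text"]}
# Observed activity = the "car" signature plus measurement noise.
observed = library["car"] + 0.3 * rng.normal(size=50)
print(decode_by_library_match(observed, library))  # prints "car"
```

Even with heavy noise, nearest-match decoding recovers "the sort of thing they're looking at," which is exactly the fuzzy level of success the study showed.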
Lenny, that's very good. That usually stumps people; you got it. So Lenny here can't form new memories. He walks into a room, he can grok what's going on, but a few minutes later, it's gone. That's a bit of an exaggeration, but people like him do exist, and there are millions of people in the US alone who have some damage to their ability to learn things, usually because of a blow to the head and damage to the part of the brain called the hippocampus. So at USC, a team led by a guy named Ted Berger wants to fix this, and they have what they call a hippocampus chip. They take rats that have damage to the hippocampus, that part of their brain. These rats cannot learn new things. You run them through a maze, great. You run them a second time, they are no better. They don't remember it at all. But when you shunt in the hippocampus chip, suddenly the rats can form new memories. The hippocampus chip is basically biomimicry. It's just mimicking the circuit layout of that part of the brain, if you will. But not only can the rats learn new things, they now get an added superpower, because what these researchers can do is take all the data that flows through the hippocampus chip and record it. And they can take that rat a year later, which in rat lifetimes is like 30 or 40 years later, and put it in front of the maze again. They can play back that set of data, and the rat will run the maze perfectly, as if it had just been there a minute ago and not half of its lifetime ago. So, storing memory. Or we can go even higher. These are rhesus monkeys, or macaques. They have an implant in their prefrontal cortex, the part of the brain that's involved in higher executive function and pattern matching. And they're trained on basically a monkey IQ test.
It's called a match-to-sample test, where they see a few images and then later get a high-speed barrage of images from which they have to pick just the right ones and not the wrong ones. So they're tested on this, and the chip learns what it looks like inside their brain when they get a right answer versus a wrong answer. Then the monkeys have their performance on this test impaired. It's impaired by giving them large doses of cocaine. So this is no joke. Well, it is a joke, but it's also true. The monkeys think their performance is going up on this test when in fact it's going down. But the chip can be put into an active mode. And in the active mode, when it sees a pattern of neural activity around itself that looks like a wrong answer forming, it can zap the neurons nearby and intervene and change that. And when it does, it can totally repair the performance degradation on this test from the cocaine. And if you put it in that active mode in a monkey that is not impaired, it lifts their performance on a hundred-point scale by about 10 points, which obviously leads to the planet of the super-intelligent cyborg apes, right? Maybe. I think the coolest thing of all this, and what I chose to write about, is the scenarios for communication, because computers at one point were used entirely for number crunching, for databases, but now they've really changed the world by becoming this, right? By allowing us to send data back and forth. And this too has been tried. Here's a study where a guy named Sam Deadwyler took two rhesus monkeys, two macaques, each with an implant in their auditory cortex, the part of the brain that processes hearing. And they're in separate rooms that are soundproofed. They cannot hear each other. But when those implants are connected together, and they play a sound for one monkey, the other monkey is able to both hear it and identify it. Sort of monkey telepathy, if you will.
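The active mode of that prefrontal chip, described a few paragraphs up, amounts to a detect-then-stimulate loop: watch for activity resembling a known "wrong answer forming" signature, and intervene when it appears. Here is a cartoon of that loop, with every signal, name, and threshold invented for illustration.

```python
import numpy as np

def closed_loop_step(neural_pattern, error_signature,
                     threshold=0.8, stimulate=None):
    """Cartoon of the chip's active mode: if current activity
    correlates strongly with a learned 'wrong answer' signature,
    fire a corrective stimulation callback. All made up."""
    similarity = np.corrcoef(neural_pattern, error_signature)[0, 1]
    if similarity > threshold:
        if stimulate is not None:
            stimulate()  # zap nearby neurons to steer the outcome
        return "intervened"
    return "observed"

rng = np.random.default_rng(1)
error_sig = rng.normal(size=20)  # learned 'wrong answer' pattern
# Activity close to the error signature triggers an intervention...
print(closed_loop_step(error_sig + 0.1 * rng.normal(size=20), error_sig))
# ...while unrelated activity is merely observed.
print(closed_loop_step(-error_sig, error_sig))
```

The design point is the loop itself: read, classify, and stimulate fast enough to change the answer before it forms.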
And this study was funded also by DARPA, by their Advanced Battlefield Communications Program, actually, so you can see sort of where this might go. Or here's the most famous researcher in all these areas, a guy named Miguel Nicolelis. And Nicolelis did this study. He had two rats in identical cages. In the cages, a light will shine in a certain pattern, and that tells the rat which lever to pull; if it gets it right, it gets fed. But one of the rats gets trained on this and the other one doesn't. They each have a prosthesis in their motor cortex, which controls the motion of their paws. So one of them just doesn't get trained; it just sits there and doesn't know which lever to pull. When you connect those motor cortex prostheses, when you allow data to flow, suddenly the rat that has never been trained starts pulling the correct lever. Not all the time, but about 70% of the time, it starts getting the answer right, despite having never been trained on it. So that's cool. What's also cool is where these rats are. One of these rats is in Nicolelis' laboratory at Duke University in North Carolina. The other rat is in his other laboratory in São Paulo, Brazil. Because once you make this data digital, you can send it anywhere data can go. And in fact, they're sending this data over the open internet. Or here at the University of Washington, where I live, in Seattle, these two guys, Rajesh Rao and Andrea Stocco, are in separate buildings about a mile apart, and they're playing a video game. A first-person shooter, a very simple one. But they're not playing it head to head, and they're not playing it cooperatively. They're playing single player. Rajesh Rao, on the left, can see the screen, but he has no controller, no data input device whatsoever. He does have this EEG skull cap, though. Andrea Stocco, on the right, a mile away, does not have a screen, but he does have the fire button. And he has this magnetic pulse stimulator on top of his head.
So when Rajesh Rao sees a bad guy and wants to shoot, he thinks "shoot." And across campus, Andrea Stocco's finger twitches and he shoots. Now, I want to be very clear. It's not that Stocco on the right feels a tingling and thinks, oh, Rajesh wants me to shoot now. No, what happens is his finger twitches, and then he realizes he's done it and he's shot. His finger has become, in a sense, part of Rajesh Rao's body. We're doing this in humans. Or the guys I talked about with the hippocampus chip, the one that can store rat memories or restore the ability of rats to learn new things. For years they talked about, and finally did, the experiment I wanted to see: two rats that each have a hippocampus chip. One rat runs the maze, the other rat stays home, but they let data flow. And sure enough, when that second rat is then placed in the maze, it already knows the maze, and it runs it as if it had run it before. So how far is that from the great philosopher Keanu Reeves? It's still pretty far. It's proof of concept that we can do this, but we're talking about sort of single transistors, if you want to analogize it to where we are now, because there are some serious challenges. Like, this is your brain. How many people here are excited about voluntary brain surgery? I'm looking. I didn't see any takers. Okay, you've got a choice. Come see me later. Normally somebody raises their hand and I'm like, that's consent right there. And then I get to ask, how many are still excited about voluntary brain surgery? Because this is what one of these arrays looks like. Now, that's actually two millimeters across, right? So here's what it actually looks like in comparison to something you know. But still, when you're putting sharp, pokey things in exactly the area you want to either get data out of or get data into, it's not really that good for the end results. So a lot of people will say this will never happen. You'll never have someone do this voluntarily. You might help the paralyzed and so on.
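The record-and-replay idea behind those hippocampus chip experiments can be caricatured in a few lines: a chip that sits in the signal path, logs what flows through it, and can stream the log into another chip. This is a cartoon of the concept only, not Berger's actual circuit; the class and its methods are invented.

```python
class HippocampusChip:
    """Toy stand-in for a hippocampus prosthesis: it relays neural
    activity downstream while recording it, and can replay the
    recording into another chip (a cartoon of the experiments above)."""

    def __init__(self):
        self.recording = []

    def pass_through(self, activity):
        # Log the activity and relay it onward, as the real chip
        # sits in the live signal path.
        self.recording.append(activity)
        return activity

    def replay(self, target_chip):
        # Stream the stored pattern into another chip's signal path.
        for activity in self.recording:
            target_chip.pass_through(activity)

trained, naive = HippocampusChip(), HippocampusChip()
for step in ["left", "left", "right", "left"]:  # one maze run
    trained.pass_through(step)
trained.replay(naive)
print(naive.recording)  # the naive chip now carries the maze pattern
```

Once the signal path is digital, "one rat runs the maze, data flows, the second rat knows the maze" is just this replay, done between two living brains.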
But we said that about eye surgery too. 20 years ago, there were 20,000 eye surgeries a year in the world. Now there are about two million. And almost all of them are done with this thing called the excimer laser. That's what we use for LASIK, right? Now, brain surgery is harder than eye surgery. Is there a LASIK for brain surgery? Well, if you're a zebrafish, there already is. And this is sort of a weird segue, but this zebrafish will end up getting placed in the zebrafish matrix. So they take zebrafish, they put them in a tank, they are locked in place, and then giant VR screens for them, about that big, show them images of potential predators and prey and mates. And they study their brains. And in zebrafish, they've gotten to just about the holy grail of real-time brain imaging. You can watch, in these very special zebrafish, every single neuron fire, every single time it fires. They're using something called light-field microscopy, a laser that scans across, plus genetically modified zebrafish whose neurons pulse with light and change color every time they fire. That was a predator that just went by. So it's great if you're a zebrafish. However, they have some advantages we don't. Their skulls are transparent until they're fully mature, which ours are not. And we can genetically modify them to have this optogenetic stuff. Even so, DARPA is not dissuaded. My friend Phil Alvelda at DARPA runs a program where they are aiming for not 256 electrodes, but, in a package the size of about two nickels stacked, less than an inch across, a million electrodes, in a form that can be maintained safely for years inside the human brain. That's where people are pushing. And we see things like this heading in that direction already. This is a silk-based interface. It stretches out on top of a brain, and then it melts into it. The silk itself biodegrades. The electrodes are flexible. That one has been used on rats. This next one has been used at Harvard.
This is what they called a neural mesh. So they don't do brain surgery. They inject this into a rat brain with a syringe. And what you're seeing on the right side of this, that is the tip of the syringe needle. That's the scale this is working at. So all of that leads me to think, uh-oh. Well, there are still issues, right? Even if we have the system working well, none of us wants to see this in their brain. Or nobody really wants to see this in their brain. So there are some issues to work out, to say the least, besides just brain surgery. But I want to ask for a minute what happens if and when we succeed. If we actually could do high-bandwidth data in and out of the brain, what consequences would that have? And I think we can ask that by asking about analogous technologies. What has happened to the world with the spread of cheap communications devices, or, going even further back, with this ability to spread our writing? What happened was a revolution. This is the book where Isaac Newton published calculus. This book exists because he had a brain-to-brain interface with thousands of people around Europe, many of whom weren't even alive anymore, who could send him information, and then he could absorb it, form his own thoughts, and publish them back out. That built the first sort of pre-internet internet of the world, if you will. And we know from Metcalfe's law that the value of a network goes up as n squared as you increase the number of nodes in it. So what happens if we have a new modality for person-to-person communication? That network grows as n squared. Now, the biggest concern people bring up is the rich versus the poor. Will this be something where only the very wealthy can afford to have an implant and everyone else will fall behind? It's possible. I think that's worth looking out for. But we've got to remember this guy. Gordon Gekko, from Wall Street. You remember this movie, perhaps. This was Gordon Gekko's cell phone.
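Metcalfe's law, as used above, says a network's value scales with the number of possible pairwise connections, n(n-1)/2, which grows as n squared. A tiny sketch makes the doubling effect concrete (the function name is just for illustration):

```python
def metcalfe_value(n):
    """Metcalfe's law: a network's value is proportional to the
    number of possible pairwise connections, n*(n-1)/2 ~ n^2."""
    return n * (n - 1) // 2

# Doubling the number of nodes roughly quadruples the connections.
print(metcalfe_value(100))  # 4950
print(metcalfe_value(200))  # 19900
```

So each person who gains a new communication modality doesn't just add one unit of value; they add a link to everyone already on the network.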
Only the rich could afford that technology. This is a Motorola DynaTAC, a real phone. It would take about 12 hours to fully charge, and its talk time was 30 minutes. It had no apps, of course, and pretty damn poor reception. And now this is the median cell phone user around the world. There are more cell phone users in Africa than in Europe. By next year, we estimate there will be half a billion internet-connected smartphones in India. So we once talked a lot about the digital divide, but you don't hear a whole lot about it anymore. And this is an augmentation technology. We are in some small way superhuman as a result of it. So what happens to the world as a result of that? We have more people who are smarter in some way and more connected. Well, you get faster innovation. You get more people creating ideas. You get more economic growth. And that's good for people in general. But I think the implications go a little bit beyond that. They're more than just economic change. They're social change. Sci-fi is really dystopian. People like to imagine the worst. It makes an exciting story. And the sci-fi image of information technology is almost always this: new information technology leads to oppression, and the state using it against you, and social control, and so on. But the evidence is actually the opposite. The printing press led to the first newspapers, which made people more informed. It led to this guy, John Locke, this radical, publishing the idea that maybe we should get along even if we're not the same religion, which led to ideas like having civil rights at all. And that would never have happened if only the princes and kings could publish their ideas. But it does happen when suddenly the technology is democratized and anyone can publish their ideas. You see that in the modern age in certain ways too, right? Look at what has happened with Twitter, for instance: Black Lives Matter.
It's not that police officers started being worse to Black people after Twitter was invented. They always were. But the democratization of the ability to publish your thoughts, your video, your images changed the dialogue. And even in places that try to stop this, the evidence is that they're mostly not succeeding. You go to China, and you will see from time to time events like this. Here's an environmental protest in a city called Dalian. These people are protesting a new chemical factory being placed in their town, and they won. It was scrapped, and their protest was organized entirely via SMS. So the ability to communicate one to one and one to many changes what the world is like. And the same thing with the protests in Hong Kong. And it's more than that. It lets us suddenly empathize with people we didn't empathize with before. Like, would these guys have been able to get married in an age before social media? I don't think they would have. I think that has changed the world. Or take other social change we've seen, like the legalization of marijuana, definitely driven faster than people expected because we could communicate. So my suspicion is that every advance in civil liberties throughout the world, throughout history, has been accelerated by new information technology. The right information technology has allowed new voices to be heard. It's allowed people to get out a message they previously couldn't get out. It's allowed us to see people we previously didn't see. It's allowed us, in some way, to see through the eyes of others. That's been a metaphor up until now, but one day it might just be real. And I think that world of greater connectivity would be a better world as well. That's it for me. Thank you very much.